I tried btrfs with a single disk but got annoyed by how compression and deduplication worked, the two things I was particularly interested in. If you force high compression and write a big chunk of data, it brings the CPU to its knees and hurts the usability of the whole system (IMO it should never let that happen: just use fewer resources and take longer, or write some of it uncompressed and compress it later). And if you don't force compression and just let it do its thing, it isn't smart enough to keep compressing the rest of a file once it hits incompressible bits partway through; it just gives up on the whole file. Apparently there are a number of manual things you can do to help, like setting up subvolumes with different compression settings for different types of data (see the sketch below), but IMO that's too much to expect from a user.
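
For anyone curious, the kind of manual tuning people suggest looks roughly like this. This is only a sketch; the device, mount point and subvolume names are made up for illustration:

    # filesystem-wide default: try zstd level 3, skip data that looks incompressible
    mount -o compress=zstd:3 /dev/sdb1 /data
    # (compress-force=zstd:15 is the "always compress, high level" variant that eats the CPU)

    # dedicate a subvolume to compressible data and mark it for compression
    btrfs subvolume create /data/text
    btrfs property set /data/text compression zstd   # inherited by new files created under it

    # existing files aren't affected by the property, so recompress them explicitly
    btrfs filesystem defragment -r -czstd /data/text

    # dedup isn't automatic either; it needs an external tool such as duperemove
    duperemove -dr /data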
Sometime this year I'll probably switch to bcachefs and see how that goes. ZFS is a non-starter for me because it's not in the mainline kernel.