Examining btrfs, Linux’s perpetually half-finished filesystem

Where are we at with Btrfs vs ZFS in 2024, as I've not seen anyone argue about it online for ages?

Ubuntu has reinstated ZFS as experimental in the installer.
 
I'm using it on an Arch install on my main PC and it seems stable enough. I'm not a power user by any means though, so as long as files don't corrupt or disappear, I'm fine with it. I think I read somewhere about better perf vs ZFS (maybe marginal, but hey, I thought I'd try the new thing), so that's why I went for it.
 
Where are we at with Btrfs vs ZFS in 2024, as I've not seen anyone argue about it online for ages?

Ubuntu has reinstated ZFS as experimental in the installer.
So my understanding is that if you’re using it in a ‘single’ disk setup (as in not RAID) then it’s pretty stable, and that has certainly been my experience running it on Fedora.

RAID, on the other hand, is still terrifying, and I personally would be hesitant to risk it (unless the data has no importance or you have exceptionally good backups).
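For what it's worth, the profiles usually called out as risky are the parity ones (raid5/raid6); mirrored data and metadata is generally considered the safer btrfs RAID. A rough sketch of what that looks like (device names and mountpoint are placeholders):

    # create a two-disk mirror with both data and metadata as raid1
    mkfs.btrfs -L pool -d raid1 -m raid1 /dev/sdX /dev/sdY
    # or grow an existing single-disk filesystem into a mirror
    btrfs device add /dev/sdY /mnt
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt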

Overall though, unless the licensing is a factor for you, I would err towards ZFS if I had the choice between the two.
 
When I used Linux as my main OS, my distro was openSUSE Tumbleweed, which uses btrfs by default. It was great being able to roll back changes if anything went wrong, especially on a rolling-release distro.
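For anyone curious, that rollback flow is snapper sitting on top of btrfs snapshots; from memory it goes roughly like this:

    # list existing snapshots (Tumbleweed takes pre/post snapshots around zypper transactions)
    snapper list
    # boot the known-good read-only snapshot from the GRUB menu, then make it the new default
    snapper rollback
    reboot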
 
I've used btrfs for a while now on Debian and never had any issues. I probably won't use ZFS on Linux, as it very much feels like a second-class citizen compared to the native ZFS on FreeBSD, in my opinion.
 
I tried btrfs with a single disk but got annoyed by how the compression and deduplication worked, two things I was particularly interested in. If you forced high compression on a chunk of data, it brought the CPU to its knees and hurt the usability of the whole system (IMO it should never let that happen; just use fewer resources and take longer, maybe write some of it uncompressed and compress it later). And if you don't force compression and just let it do its thing, it's not smart enough to compress the latter part of a file if incompressible bits somewhere in the middle cause it to stop trying. Apparently you can do a number of manual things to help, like setting up subvolumes for different types of data, but IMO that's too much to expect from a user.
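From what I gathered, those manual things amount to forcing compression per directory or per file instead of relying on the global mount option; something along these lines (the paths are just examples):

    # mark a directory so new files written into it get compressed with zstd,
    # regardless of the mount-wide setting
    btrfs property set /data/textdumps compression zstd
    # or request compression via the file attribute instead
    chattr +c /data/textdumps
    # rewrite existing files so compression is applied to data already on disk
    btrfs filesystem defragment -r -czstd /data/textdumps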

Sometime this year I'll probably switch to bcachefs and see how that goes. ZFS is a non-starter for me because it's not in the mainline kernel.
 
Sometime this year I'll probably switch to bcachefs and see how that goes. ZFS is a non-starter for me because it's not in the mainline kernel.
If being mainline is important to you I’d be watching the bcachefs ‘drama’ closely before committing to anything…
 
If being mainline is important to you I’d be watching the bcachefs ‘drama’ closely before committing to anything…
I am, but also don't buy into the drama so much. I'm just giving bcachefs enough time to cook before taking a bite.
 
I tried btrfs with a single disk but got annoyed by how the compression and deduplication worked, two things I was particularly interested in. If you forced high compression on a chunk of data, it brought the CPU to its knees and hurt the usability of the whole system (IMO it should never let that happen; just use fewer resources and take longer, maybe write some of it uncompressed and compress it later). And if you don't force compression and just let it do its thing, it's not smart enough to compress the latter part of a file if incompressible bits somewhere in the middle cause it to stop trying. Apparently you can do a number of manual things to help, like setting up subvolumes for different types of data, but IMO that's too much to expect from a user.

Sometime this year I'll probably switch to bcachefs and see how that goes. ZFS is a non-starter for me because it's not in the mainline kernel.
Hmmm, unless I've missed something, it still doesn't actually support any sort of inline deduplication, though there are external projects that can scan files and use the extent-same ioctl (FIDEDUPERANGE) to make the filesystem aware that it can drop one of the copies of the data. Did you mean the copy-on-write snapshots and file copies?
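Tools like duperemove and bees do this by hashing extents and handing duplicate ranges to the kernel through that ioctl; usage is roughly (the path is a placeholder):

    # scan /data, hash extents, and submit duplicate ranges to the kernel for dedupe
    # -d actually performs the dedupe (otherwise it's a dry run), -r recurses, -h prints human-readable sizes
    duperemove -dhr /data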

When it comes to compression, it compresses in blocks, and as a heuristic, if the first block of a file doesn't compress it doesn't bother with the rest of that file. Like you say, not ideal sometimes, but given the block size was pretty big (128K, I want to say?), it did a fairly okay job. What did you set the compression to? I'd recommend leaving it on the defaults - you won't save much more space setting it crazy high (or picking a higher-compression-ratio algo), and as you say, things will just get unusable.
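To spell out the difference, the default behaviour versus forced compression is just two different mount options; roughly (device and mountpoint are placeholders, and zstd levels go from 1 to 15):

    # default-style: compress with zstd level 3, but give up on a file once it looks incompressible
    mount -o compress=zstd:3 /dev/sdX /mnt
    # forced: try to compress every extent, even after earlier ones failed to shrink
    mount -o compress-force=zstd:15 /dev/sdX /mnt
    # check how much space compression is actually saving
    compsize /mnt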
 
You haven't missed online dedupe - I was messing around with offline dedupe; it's been a while, so I can't remember exactly what (probably these tools: https://btrfs.readthedocs.io/en/latest/Deduplication.html ). The compression was set to strong, probably zstd 15 (the default is 3, I believe), and 128KiB blocks sounds right. tl;dr I wanted strong transparent compression + dedupe for a read-only set of related images with random-access requirements. It turns out btrfs was unsuitable for that use case (compression performed terribly, dedupe was weak), so I ended up making my own format and a FUSE program for transparent access.

I'm still interested in btrfs/bcachefs but for a normal use case of CoW, checksums and compression on a single disk. Whenever bcachefs seems in good shape I'll probably try both at default settings and see which makes the cut.
 
I've been using it as my main filesystem (I have Fedora on almost every device, and it uses it by default anyway) for about 4 years, and in that whole time I only had an issue once: it turns out it doesn't play nicely if your CO/undervolt is too aggressive - it wouldn't boot up and just threw errors. I had to re-adjust the CO and it was fine after that; this was a couple of years ago.
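For what it's worth, after an episode like that btrfs can re-verify everything on disk and show per-device error counters, roughly:

    # re-read and verify checksums of all data and metadata on the mounted filesystem
    btrfs scrub start /
    btrfs scrub status /
    # cumulative per-device read/write/checksum error counters
    btrfs device stats /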
 
I've been using it as my main filesystem (I have Fedora on almost every device, and it uses it by default anyway) for about 4 years, and in that whole time I only had an issue once: it turns out it doesn't play nicely if your CO/undervolt is too aggressive - it wouldn't boot up and just threw errors. I had to re-adjust the CO and it was fine after that; this was a couple of years ago.
How silly of me, you're right. It turns out I'm on btrfs on Fedora right now and haven't noticed any issues, so I guess that answers whether a CoW FS can transparently replace ext4 for an OS drive. No dedupe, and the default compression setting from the Fedora installer is apparently zstd 1, which makes sense for an OS drive that's probably sitting on a SATA SSD at minimum, more likely a fast NVMe SSD; heavy compression would just hurt performance.
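For reference, the installer's defaults end up as fstab entries along these lines (the UUID is a placeholder; the subvolume names are the Fedora defaults as far as I remember):

    UUID=xxxx-xxxx  /      btrfs  subvol=root,compress=zstd:1  0 0
    UUID=xxxx-xxxx  /home  btrfs  subvol=home,compress=zstd:1  0 0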

Being on btrfs already with no issues raises the question of why I'd consider bcachefs at all. The tl;dr of this benchmark ( https://www.phoronix.com/review/linux-611-filesystems ) is that bcachefs is better at small random IO and btrfs is better at large sequential writes. Once bcachefs is stable, for some arbitrary definition of stable, it might technically be better for an OS drive, which arguably does a lot of small-file work, but the margin might be small.
 