All of my important data is on btrfs drives. I intend to install my system on ZFS. Why, you may ask? Because I can. That’s the fun of Linux after all. I intend to mount btrfs drives as well. I hear that ZFS can break fairly easily? Is this a bad idea?
Edit: I understand ZFS is out of tree, but CachyOS maintains their own package and DKMS module, so it shouldn't matter, I would think?
ZFS is magic and all the hate is from absolute idiots. Its fancy features are no harder to use than BTRFS's, and its resiliency is phenomenal with enough drives.
I was having constant IO errors and problems due to bad SATA cables/controllers for months, and it only halted my system maybe three times. It never failed to boot after each hard reset, and a scrub quickly corrected issues every time. For months.
I know it's anecdotal, but I also know that if it were any other file system running my Proxmox, my VMs and perhaps even my main OS would've been irrevocably corrupted over the months it took me to troubleshoot my physical controllers/cables.
It has never been difficult to take ZFS snapshots or restore, and I never had to use them to fix the thousands of detectable IO errors over those months, either. I’ve fallen in love with ZFS, and it has had nothing to do with the sales pitches I’ve heard over the years.
If CachyOS even tacitly supports it, much like the now-old version of Proxmox I have installed does, it'll be worth it. At least assuming you have enough redundancy in drives to take advantage. I have 12 HDDs backed by two mirrored NVMe special devices (they hold the file index and I think hashes?) with high write tolerance that were surviving all the IO errors. I'm sure that tolerance drops significantly with only a few SATA devices and no special devices, though IMO the algorithms have more than proven their worth.
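For anyone curious what a layout like that looks like, here's a rough sketch using `zpool`. The pool name, vdev widths, and device paths are all placeholders, not the actual setup described above:

```shell
# Hypothetical pool: 12 HDDs split into two raidz2 vdevs, plus a mirrored
# NVMe "special" vdev. The special vdev stores pool metadata (and
# optionally small blocks), which is roughly the "file index" mentioned.
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also send small file blocks to the fast special vdev:
zfs set special_small_blocks=64K tank

# A scrub re-reads every block, verifies checksums, and repairs
# mismatches from redundancy -- this is what fixed the IO errors above:
zpool scrub tank
zpool status -v tank
```

Note that if the special vdev is lost, the whole pool is lost, which is why it has to be mirrored.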
It should be fine, but, why? ZFS isn’t a filesystem like btrfs, it’s a fully-integrated stack of filesystem, volume manager, and software RAID system. Using it on your OS drive is kind of like using a sword as a letter opener. Sure, it works, but that’s not what’s fun about it.
I disagree about it being a sword as a letter opener for an OS drive. An OS drive is where it shines: where you can roll back upgrades and corruption with snapshots, where large logs live compressed in storage without a second thought, and where storing two copies of critical OS files, or mirroring across two drives, defeats corruption from drive sectors going bad, or in the latter case prevents downtime and data loss from the OS drive dying.
If you want to argue it can be a headache to boot Linux on ZFS, I'll agree, but using its feature set to argue against it makes no sense.
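The features listed above each map to one or two commands. This is just a sketch; `rpool/ROOT/os` is a hypothetical root dataset name, so substitute your own layout:

```shell
# Snapshot before an upgrade; roll back if it goes wrong:
zfs snapshot rpool/ROOT/os@pre-upgrade
zfs rollback rpool/ROOT/os@pre-upgrade

# Transparent compression (logs compress especially well):
zfs set compression=zstd rpool/ROOT/os

# Keep two copies of every block on a single disk, so a bad
# sector can be repaired from the duplicate:
zfs set copies=2 rpool/ROOT/os

# Or attach a second drive as a mirror to survive a whole-disk failure
# (device paths are placeholders):
zpool attach rpool /dev/disk-a /dev/disk-b
```

`copies=2` only protects against localized corruption on one disk; a real mirror is what protects against the drive dying outright.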
It's the default filesystem for the OS over in the BSD world, where reliability and stability surpass Linux.
I’ve used btrfs for years. I kind of wanted to try using something different. I also want to learn more about ZFS.
Haha, so it turns out I was uninformed. Btrfs has all of that, too. TIL! Might as well use ZFS for the OS drive then in order to learn its commands.
I’ll be using it too. I’ll let you know how it goes.
In my limited experience running ZFS (daily driving FreeBSD with ZFS for two years, and also running Void Linux with ZFS for about a week quite recently), I always found it to be a nice filesystem that was decently fast and didn’t break. I currently have a ZFS partition on an external disk, but that’s just because I have some files with names longer than 255 bytes that I need to store, and I can’t really use ReiserFS anymore. However, for my root partition, I usually opt for XFS (or JFS, on older machines).
It’s also worth noting that my stack is weird. I tend to run well-known but less popular Linux distros on older hardware, normal usage sometimes breaks everything, and I could probably leave my laptop unlocked at uni because only about three other people know how to use my window manager. Also, the hardware itself has a habit of dying for a week or so and then miraculously self-repairing.
Haven't used ZFS, so I can't speak from the filesystem perspective, but I can answer from the perspective of dealing with out-of-tree drivers.
Quick answer: DKMS should handle things.
Long answer: since it's out of tree, kernel upgrades should be treated with care, and you should make sure that the kernel version and the ZFS version don't have issues. Rare, but possible. DKMS should handle things 95% of the time. Before putting data on it, try to fuck it up. Do some kernel upgrades and see what the flow is, and what the flow is if something fucks up. It's better to have that in hand, tested, and documented in a safe "in case of shit" folder.
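A rough pre-reboot checklist for that flow might look like this. It assumes a DKMS-built `zfs` module; exact module paths vary by distro:

```shell
# Confirm the zfs module is built and installed for the running kernel:
dkms status
uname -r

# Check the version of the module actually available to the kernel:
modinfo zfs | grep -i '^version'

# After installing a new kernel, verify DKMS rebuilt zfs against it
# BEFORE rebooting -- "dkms status" should list the new kernel version
# next to the zfs module:
dkms status zfs
```

If the new kernel isn't listed, don't reboot into it yet; that's the 5% case where you'd end up in a rescue shell with no root filesystem driver.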
I never keep important data on my OS drive. So my only worry with it breaking is the time I’d have to spend reinstalling and reconfiguring my system. Thank you!





