About a year ago I switched to ZFS for Proxmox so that I wouldn't be running on a technology preview.
Btrfs gave me no issues for years, and I even replaced a dying disk without any trouble. I use RAID 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean I can't downgrade the kernel, and the performance on my hardware is abysmal: I get only around 50-100 MB/s vs the several hundred I would get with btrfs.
Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or having unexplainable errors. That is sad to hear as btrfs has had lots of time to mature in the last 8 years. I would never have considered it 5-6 years ago but now it seems like a solid choice.
Anyone else pondering the switch, or already using btrfs?
No reason not to. Old reputations die hard, but it’s been many many years since I’ve had an issue.
I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc (rough sketch below).
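For example, growing a RAID1 pool by one disk is roughly this (a sketch only; the mount point /mnt/pool and the device /dev/sdd are made up, substitute your own):

```
# add a new disk to an existing btrfs RAID1 filesystem
btrfs device add /dev/sdd /mnt/pool

# rebalance so existing data/metadata get spread across all devices,
# keeping the raid1 profiles
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# check how space is now allocated across the disks
btrfs filesystem usage /mnt/pool
```

With ZFS you're largely stuck adding whole vdevs of a matching layout; with btrfs the balance just reshuffles chunks over whatever disks are present.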
I'll add that you should avoid RAID5/6, as that is still not considered safe, but you mentioned RAID1, which has no issues.
I’ve been vaguely planning on using btrfs in raid5 for my next storage upgrade. Is it really so bad?
It's affected by the write-hole phenomenon: if a power loss interrupts a write mid-stripe, the parity can be left inconsistent with the data. In btrfs's case that can mean perfectly good old data in the same stripe gets silently corrupted when the array is later rebuilt from that parity.
Check the upstream btrfs status page. It looks like it may be a little better than in the past, but I'm not sure I'd trust it.
An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn't the best idea for a system drive, but for something like a NAS it works well, and snapraid-btrfs doesn't have the write hole issues that plain snapraid does, since it operates on read-only snapshots instead of the raw data.
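To give a rough idea of the layout (a sketch only, with made-up mount points: two btrfs data disks at /mnt/disk1 and /mnt/disk2, parity at /mnt/parity1, merged pool at /mnt/storage; check the snapraid-btrfs README for where it wants the content files and snapper configs):

```
# /etc/snapraid.conf
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/parity1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

```
# /etc/fstab - mergerfs pools the data disks into one mount for day-to-day use
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0
```

Then instead of running snapraid sync directly you run it through the wrapper (snapraid-btrfs sync), which snapshots each data disk first and syncs parity against those read-only snapshots, so files changing mid-sync can't leave the parity inconsistent.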