Hi, I got a tiny Lenovo M720Q (i5-8400T / 8 GB RAM / 128 GB NVMe / 1 TB 2.5" HDD) that I want to set up as my home server, with the ability to add 2 more drives later (for RAID5 if possible) using its two USB 3.1 Gen 2 (10 Gbps) ports.

  • The OS (Debian 12 + Docker) will live exclusively on the NVMe drive; I will mostly use about 40 of its 128 GB and have no idea how to make use of the rest.

  • My data (media, documents and ISO files) will reside on the HDD pool, while I keep a copy of my docs on my home PC.

I read a bit about BTRFS RAID and even experimented with it in a VM, and it really got me interested because of its flexibility: rebalancing between RAID levels and hot-swapping unequally sized drives in both striped and mirrored arrays. However, most of what I read online predates kernel 6.2 (which improved BTRFS RAID56 reliability). So here I am, asking whether anyone here is using BTRFS RAID and whether it is stable enough for a mostly idle server, or whether I should stick with LVM instead. What good practices should I follow, and what bad ones should I avoid?

Thank you.

  • mee@programming.dev

    I would still say don’t use RAID56. You can use RAID1, which will give you the sum of all your drives divided by 2 in usable space, as long as you’re not combining, say, a 4 TB drive with 2x1 TB. It’s called RAID1, but it really just writes all data to 2 separate drives; that’s why, in the 4 TB + 2x1 TB example, you don’t have enough room to write more than 2 TB to separate drives. https://www.carfax.org.uk/btrfs-usage/ is a calculator you can play with.

    The docs still list RAID56 as unstable: https://btrfs.readthedocs.io/en/latest/Status.html#block-group-profiles
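
    If you want to sanity-check what a given mix of drives actually gives you on a live filesystem, btrfs can report it directly (mount point below is just an example):

        btrfs filesystem usage /mnt/pool

    The "Free (estimated)" line there takes the RAID profile into account.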

    • mhz@lemm.ee (OP)

      Thank you for the links. I will hold off on using RAID and stick with BTRFS single until I upgrade my storage to higher-capacity drives or my server to something with more reliable SATA ports.
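
      For reference, the single-profile layout I have in mind would be created roughly like this (device name is a placeholder, not my actual disk):

          mkfs.btrfs -L data -d single -m dup /dev/sdX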

  • Yote.zip@pawb.social

    You can also use MergerFS + SnapRAID over individual BTRFS disks, which gives you a pseudo-RAID5/6 that is safe. You dedicate one or more disks to hold parity, and the rest hold data. At a specified interval, SnapRAID calculates parity and stores it on the parity disks (it is not realtime). MergerFS scatters your files across the data disks without striping and presents them under one mount point. Speed is limited to that of the single disk holding a given file. An unmitigated disk failure only loses the files that were assigned to that disk, again because there is no striping. Disks can be pulled and plugged in elsewhere to access the files they hold.

    It’s a bit of a weird-feeling solution if you’re used to traditional RAID, but it’s very flexible: you can add and remove disks, and they can be any size, as long as your parity disks are the largest.
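
    A rough sketch of what that looks like in practice (disk names, mount points and paths below are made up for illustration):

        # /etc/snapraid.conf (excerpt)
        parity /mnt/parity1/snapraid.parity
        content /var/snapraid/snapraid.content
        content /mnt/disk1/.snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/

        # /etc/fstab line pooling the data disks with mergerfs
        /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

    You then run "snapraid sync" on a schedule (cron or a systemd timer) to update parity, and "snapraid scrub" occasionally to verify it.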

  • Moonrise2473@feddit.it

    I understand that software RAID over USB is dangerous, as the drives can sometimes go offline for a few seconds due to current fluctuations and then lose sync. Maybe it’s OK for files that don’t get accessed too often, like video file backups.

    • poVoq@slrpnk.net

      In my experience there are often issues with SATA SSDs over USB, but slower HDDs seem to work fine. With btrfs I would set up a regular scrub job to find and fix possible data errors automatically.
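
      For example, a monthly entry in /etc/cron.d along these lines (mount point is just a placeholder):

          0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/pool

      You can check the result afterwards with "btrfs scrub status /mnt/pool".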

      • Atemu@lemmy.ml

        With btrfs I would set up a regular scrubbing job to find and fix possible data errors automatically.

        This only works for minor errors caused by tiny physical changes. A buggy USB drive dropping out and losing writes it claimed to have written can kill a btrfs filesystem (sometimes unfixably so), especially in a multi-device scenario.

        • poVoq@slrpnk.net

          How so, if the second drive in the RAID1 retains a working copy and the checksum is correct? I have had USB drives drop out on me for longer periods before, and it was never a problem after reconnecting them and doing a scrub.

          But of course raid is not a backup, so that is only the first line of defense against data loss 😉

          • Atemu@lemmy.ml

            The problem is at the logic level. What happens when one drive drops out but the other does not? Well, the remaining drive will continue to receive writes, because a setup like this is tolerant of exactly that fault.

            Now imagine both connections are flaky and the currently available drive drops out as well. Our setup isn’t that fault-tolerant, so the FS goes read-only and throws I/O errors on read.
            But as the sysadmin takes a look, the drive that first dropped out re-appears, so they mount the filesystem from it again and continue the workload.

            Now we have a split brain. The drive that dropped out first missed the changes that happened to the other drive. When the other drive comes back, they’ll have diverged states. Btrfs can’t merge these.

            That’s just one possible way this can go wrong. A simpler one I alluded to is a lost write, where a drive claims to have permanently written something, but if power is cut at that moment and the same sector is read after restart, it does not actually contain the new data. If that happens to all copies of a metadata chunk, goodbye btrfs.

        • mhz@lemm.ee (OP)

          A buggy USB drive dropping out and losing writes it claimed to have written can kill a btrfs

          Mixing USB and SATA drives sounds like a very bad idea; I’m holding off on using an array of drives connected over USB. Thank you for your comment.

          • Atemu@lemmy.ml

            It’s not the mixing that’s bad, it’s using USB in any kind of multi-device setup, or even using USB drives for active workloads at all.

  • joshfee@kbin.social

    The stability issues with RAID5 on BTRFS tend to be blown out of proportion. I’ve been using it for my home server, with mixed drives totalling ~50 TB raw, for about 5 years without any issues. I use it for the same benefits you’ve noted, mainly support for ad hoc expansion while still maximizing usable space. Until bcachefs is released, BTRFS is the only filesystem I’ve seen with these features.

    That said, the mentioned stability issues, particularly the write hole, ARE real and possible. I wouldn’t use it in a commercial production scenario or for data I couldn’t stand to lose. Anything I care enough about has off-site backups, and my server runs on a UPS to mitigate issues from power outages. But for my purposes it’s worth the minor risk for the benefits.

  • poVoq@slrpnk.net

    There is little benefit to using RAID5/6 over RAID1 IMHO, since you can quite easily match disks to utilize all the space, as others have already mentioned.

  • Voroxpete@sh.itjust.works

    Stability is not really a concern with BTRFS. Like, it works, and reliably, from what I’ve seen. But don’t use it for anything where you care about IOPS, and definitely don’t use it for RAID 5 until they solve the write hole problem.

    I would recommend ZFS instead (just make sure your drives aren’t SMR).

  • Atemu@lemmy.ml

    I got a tiny Lenovo M720Q (i5-8400T / 8 GB RAM / 128 GB NVMe / 1 TB 2.5" HDD) that I want to set up as my home server, with the ability to add 2 more drives later (for RAID5 if possible) using its two USB 3.1 Gen 2 (10 Gbps) ports.

    Do not use USB drives in a multi-device scenario. Best avoid actively using them at all. Use USB drives for at most daily backups.

    I wouldn’t advocate for RAID5. I’d also advocate against RAID to begin with in a homelab setting unless you have special uptime requirements (e.g. often away from home for prolonged periods) or an insane amount of drives.

    I will mostly use about 40 of its 128 GB and have no idea how to make use of the rest.

    I use spare SSD space for write-through bcache. You need to make that decision early on, because you need to format the HDDs with bcache beneath the FS, and post-formatting conversions are hairy at best.
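
    If you go that route, the rough shape of the setup is something like this (device names are examples only; read the bcache docs before touching real disks):

        # cache device on the spare NVMe space, backing device on the HDD
        make-bcache -C /dev/nvme0n1p3
        make-bcache -B /dev/sda
        # attach the cache set (UUID from bcache-super-show); write-through is the safe mode
        echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
        echo writethrough > /sys/block/bcache0/bcache/cache_mode
        # format the resulting bcache device, not the raw HDD
        mkfs.btrfs /dev/bcache0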

    most of what I read online predates kernel 6.2 (which improved BTRFS RAID56 reliability).

    Still unstable and only for testing purposes. Assume it will eat your data.
