I’m installing 3x 2TB HDDs into my desktop PC. The drives are like-new.

They will replace an ancient 2TB drive that is failing. The primary purpose will be data storage: media, torrents, and some installed games. Losing the drives to failure would not be catastrophic, just annoying.

So now I’m faced with how to set up these drives. I think I’d like to do a RAID to present the drives as one big volume. Here are my thoughts, and hopefully someone can help me make the right choice:

  • RAID0: I would have been fine with the risk across 2 drives, but 3 drives seems like tempting fate. Then again, it might be fine anyhow.
  • RAID1: Lose half the capacity, but a pretty braindead-simple setup. Left wondering why pick this over RAID10?
  • RAID10: Lose half the capacity… left wondering why pick this over RAID1?
  • RAID5: Write-hole problem in the event of a sudden shutoff, but I’m not running a data center that needs high reliability. I should probably buy a UPS to mitigate power outages anyway. Would the parity calculations make this option slow?

I’ve also ruled out things like ZFS and mdadm, because I don’t want to complicate my setup. Straight btrfs is straightforward.

I found this page where the author analyzed the performance of different RAID levels, though not with BTRFS: https://larryjordan.com/articles/real-world-speed-tests-for-different-hdd-raid-levels/ (there’s a PDF link with harder numbers in the post). So I’m not sure whether his analysis is helpful to me at all.

If anyone has thoughts on what RAID level is appropriate given my use-case, I’d love to hear it! Particularly if anyone knows about RAID1 vs RAID10 on btrfs.

  • notagoblin@lemmy.world · 47 minutes ago

    I built a franken-server from some spare bits a while ago using old disks of different sizes.

    The disks were formatted individually with BTRFS (most any FS would do) and then a mergerfs pool was set up across all the drives. I got full use of all my old drives; sizes varied between 1TB and 6TB, giving me 20TB of usable space.

    Mergerfs has no redundancy and no parity, although it is designed to work with snapraid.

    If a drive is lost, only the data on that drive is lost; you just replace that one drive, and the data on the other drives remains intact. By contrast, the loss of a drive in a RAID0 or JBOD array results in the loss of the whole array and all its data.

    Some sort of backup is probably appropriate.

    https://trapexit.github.io/mergerfs/latest/

    Might be worth a look.
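    A minimal sketch of that kind of pool, in case it helps (mount points, device layout, and options here are just examples, not my exact setup):

    ```shell
    # Each drive is formatted and mounted on its own (paths are examples)
    mkdir -p /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/pool

    # Pool the mounted drives into one view with mergerfs;
    # category.create=mfs sends new files to the drive with the most free space
    mergerfs /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool \
        -o cache.files=off,category.create=mfs,moveonenospc=true

    # Or persist it in /etc/fstab:
    # /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  mergerfs  cache.files=off,category.create=mfs,moveonenospc=true  0 0
    ```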

  • gravitas@lem.ugh.im · 3 hours ago

    I run an 8-disk 90TB BTRFS RAID0 and it’s been going strong for over a year. I hesitate to actually encourage anyone to live this dangerously, but it’s mainly a media server, and if I needed to I could restore any lost data from usenet or torrents using my arr stack, which is on a different drive. If you can handle the risk, it’s a nice speed boost.

  • kaki@sh.itjust.works · 2 hours ago

    I set up BTRFS RAID1 on my NAS recently; here’s what I gathered on BTRFS RAID levels:

    • RAID1 vs RAID10:
      • RAID10 theoretically offers better performance (striping), but the BTRFS implementation isn’t well optimized, so you don’t get that much of a performance boost compared to RAID1.
      • RAID1 may also waste less disk space when using disks of different sizes compared to RAID10, here’s a simulator to check: https://www.carfax.org.uk/btrfs-usage/ . In your case it doesn’t matter since all your disks are the same size.
    • RAID5/6:
      • It’s not only the write hole: scrubbing is also broken (huge read amplification makes performance unusable on large filesystems).
      • RAID5/6 is slower than everything else (not a BTRFS problem, it’s just parity calculations).
      • I believe the BTRFS developers are getting (relatively) closer to fixing RAID5/6 these days, but they still don’t recommend using it. Switching between BTRFS RAID modes can be done pretty easily later without recreating the filesystem, so you could always switch to it once it’s fixed.

    In your case, I’d probably use RAID1 or RAID10 for data, and RAID1c3 for metadata.

    EDIT: just saw that you don’t care that much about losing the data. In that case you don’t need any fancy RAID thing; as someone else said, just add all your disks in single mode. Way simpler, and you don’t lose any disk space.
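    For what it’s worth, that whole setup is only a couple of commands. A sketch, with example device names (double-check yours with lsblk first):

    ```shell
    # RAID1 for data, RAID1c3 for metadata (needs btrfs-progs and kernel >= 5.5)
    mkfs.btrfs -d raid1 -m raid1c3 /dev/sda /dev/sdb /dev/sdc
    mount /dev/sda /mnt/storage

    # Profiles can be converted in place later, e.g. down to single data:
    btrfs balance start -dconvert=single -mconvert=raid1 /mnt/storage
    ```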

    • GnuLinuxDude@lemmy.ml (OP) · 5 hours ago

      That’s a good point about scrubbing on RAID5. I don’t think I ever really want to spend time on that. RAID1 at least sounds less complex, both in terms of setup and down-the-line maintenance.

  • MentalEdge@sopuli.xyz · 3 hours ago

    If you just want to combine their capacities, and don’t need redundancy, just use single mode?

    No need to use a raid mode for multi-device btrfs.

    Edit: You could also do two volumes.

    Split each drive in half. Use the first half of each drive for a raid1c2 volume to get 1.5TB of redundant storage for important data.

    Use the second half of each drive for a raid0 volume to get 3TB of faster storage for games.
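    Roughly, that split could look like this (partition boundaries and device names are just examples):

    ```shell
    # Split each drive into two partitions (devices are examples)
    for d in /dev/sda /dev/sdb /dev/sdc; do
        parted -s "$d" mklabel gpt \
            mkpart first 1MiB 50% \
            mkpart second 50% 100%
    done

    # Redundant volume across the first halves
    # (btrfs "raid1" means 2 copies, i.e. raid1c2)
    mkfs.btrfs -d raid1 -m raid1 /dev/sda1 /dev/sdb1 /dev/sdc1

    # Striped volume across the second halves
    mkfs.btrfs -d raid0 -m raid1 /dev/sda2 /dev/sdb2 /dev/sdc2
    ```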

    • GnuLinuxDude@lemmy.ml (OP) · 4 hours ago

      I was unfamiliar with single mode. What advantage does it give me over RAID0 in terms of combining their capacities?

      • MentalEdge@sopuli.xyz · 4 hours ago

        It combines capacity without any fancy striping. It can still provide some performance benefit as different blocks of the same file can be stored on different drives, but it doesn’t stripe data across the drives for performance.

        It also allows you to just add more drives later. The drives don’t need to be the same size or type. You can also remove drives, provided there is enough free space to move the data on a drive to the ones that will remain.

        It really just pools the storage capacity into one big volume.

        If a drive fails, it still takes the whole volume with it, tho. But as long as you monitor SMART, it is fairly simple to try removing a failing drive from the device group before it takes the whole thing with it.
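        A sketch of what that looks like in practice (device names and mount point are examples):

        ```shell
        # Create a single-mode pool across the drives
        mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb /dev/sdc

        # Grow it later with any disk, any size
        btrfs device add /dev/sdd /mnt/storage

        # Shrink it: btrfs migrates the data off the drive first,
        # as long as the remaining drives have room for it
        btrfs device remove /dev/sdb /mnt/storage
        ```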

  • Svinhufvud@sopuli.xyz · 4 hours ago

    With 3 drives I would go with BTRFS RAID1. You get half the capacity, can lose any one of the drives, and it is really simple.

    If you were to add drives later on to increase the array size, I might consider RAID10, but if you plan on sticking with 3 drives, I would go with RAID1 instead.

    RAID0 offers no redundancy, so I would skip it.

    RAID5/6 are not recommended for use in production.

  • antsu@discuss.tchncs.de · 4 hours ago

    Briefly addressing the RAID types you mentioned:

    • RAID-0: OH NO OH GOD PLEASE NO AHHHHHHH
    • RAID-1: There’s nothing wrong with it, but it feels very weird to me that BTRFS can do RAID-1 over 3 disks. It’s still technically 2 copies of every block, meaning you can in theory lose any single drive and still recover the data, but idk, it just feels wrong.
    • RAID-10: Again, weird that BTRFS allows this with an odd number of disks. From what I can find, this has no advantage over RAID-1 with 3 disks and can in fact perform worse.
    • RAID-5: It’s probably fine if you have backups, and you get the most usable space out of your disks, but performance will take a hit, and scrubs are terribly slow.

    Now, you mentioned not wanting ZFS due to complexity, but really, it is no more complex to manage than BTRFS. It’s fairly easy to get it working on any modern Linux distro (Ubuntu has support out-of-the-box, Debian has it packaged as a DKMS module, Arch has it in AUR, and so on).

    With ZFS, you could create a RAID-Z1 (equivalent to RAID-5) without any of the performance penalties or risks that BTRFS RAID-5 has. Both have pretty much the same features; the main difference is that ZFS can’t be “re-balanced” to a different disk layout like BTRFS can, but it will also generally not corrupt your data if you look at it wrong. Everything else maps pretty much 1:1 between them. Both support:

    • Transparent compression
    • Transparent checksums
    • Scrubbing
    • Snapshots
    • Block-level transfers (btrfs-send / zfs-send)
    • Fairly intuitive CLI
    • Filesystem (BTRFS) == Pool (ZFS)
    • Subvolume (BTRFS) == Dataset (ZFS)
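    To illustrate, the ZFS side of that is about this much work (pool name and device paths are examples; /dev/disk/by-id paths are generally preferred over /dev/sdX):

    ```shell
    # RAID-Z1 across the three drives
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

    # Transparent compression, like btrfs's compress mount option
    zfs set compression=lz4 tank

    # Datasets play the role of btrfs subvolumes
    zfs create tank/media
    ```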
    • GnuLinuxDude@lemmy.ml (OP) · 4 hours ago

      My concern with ZFS is that I use Fedora, so the kernel updates really frequently. I know that it kicks ass, but I just like having support built into the kernel I already have installed, so that I never have to deal with a

      “If kernel module can not be loaded, your kernel version might be not yet supported by OpenZFS. An option is to use an LTS kernel from COPR, provided by a third-party. Use it at your own risk.”

      situation. (https://openzfs.github.io/openzfs-docs/Getting%20Started/Fedora/index.html)

  • Shimitar@downonthestreet.eu · 5 hours ago

    Set up a RAID5 with mdadm, then format that with btrfs.

    Using btrfs for RAID is not recommended by btrfs itself.

    But setting up a RAID5 mdadm array is very simple and lets you use btrfs on top.
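    Something like this (device names are examples):

    ```shell
    # Three-disk RAID5 array, then plain btrfs on top
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    mkfs.btrfs /dev/md0
    mount /dev/md0 /mnt/storage

    # Persist the array definition so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    ```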

    I agree that zfs is too much.

    Edit: with three drives, RAID1 doesn’t make sense. As for RAID10, in over 20 years I’ve never seen it used by anybody.

    • MentalEdge@sopuli.xyz · 5 hours ago

      > with three drives, raid1 doesn’t make sense

      In raid1c2 mode, btrfs will give 3TB of usable storage with 3x2TB. It always stores two copies, on two drives, not three.

    • GnuLinuxDude@lemmy.ml (OP) · 5 hours ago

      > with three drives, raid1 doesn’t make sense

      It’s perfectly usable with a btrfs setup. If one drive fails, you can mount in a degraded state.
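      A sketch of that recovery path, with example device names:

      ```shell
      # Mount the surviving drives read-write in degraded mode
      mount -o degraded /dev/sda /mnt/storage

      # Find the devid of the missing drive, then rebuild onto a new disk
      btrfs filesystem show /mnt/storage
      btrfs replace start -B 2 /dev/sdd /mnt/storage   # "2" is an example devid
      ```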

      • Shimitar@downonthestreet.eu · 3 hours ago

        If you mean using three disks as two, wasting 66% of the space to keep a spare drive… I think that’s a waste on a home PC.

        If you mean using three disks to get 6TB/2 = 3TB of available space, then that’s something I only recently became aware of (see the reply to my earlier comment), so yes.