My life is generally Mac- and Thunderbolt-centric, and I’m trying to bring it into the Linux world. The workflow on the Mac: immediate storage goes onto a 2TB external NVMe, then to one of the two 4TB drives in my OWC open dock, then moves into a 16TB software RAID array, and backup is on a 16TB hardware RAID; all are Thunderbolt enclosures. The issue I have is that mdadm doesn’t work on either my Trixie or 26.04 install, so I can’t convert my existing software RAID to anything readable.

Does anyone know of an 8-bay (preferably Thunderbolt) hardware RAID enclosure that isn’t $1000?

  • bizdelnick@lemmy.ml · edited 2 hours ago

    mdadm doesn’t work on either my Trixie or 26.04 install

    What? Do you mean it is incompatible with the macOS software RAID implementation? That does not mean mdadm doesn’t work. You would be just as unable to read your software RAID if you inserted the disks into a hardware RAID device.
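    For what it’s worth, you can check what metadata the disks actually carry. A rough sketch, assuming the members show up as /dev/sdb and /dev/sdc (placeholder names; substitute your own devices):

```shell
#!/bin/sh
# Inspect each member disk for RAID metadata. AppleRAID (macOS software
# RAID) members carry no mdadm superblock, so mdadm printing
# "No md superblock detected" on them is expected, not a sign that
# mdadm itself is broken.
for dev in /dev/sdb /dev/sdc; do    # placeholder device names
    if [ ! -b "$dev" ]; then
        echo "skip $dev (not present)"
        continue
    fi
    mdadm --examine "$dev" || true  # Linux md metadata, if any
    blkid "$dev" || true            # filesystem/RAID signatures
done
```

    If blkid shows Apple filesystem signatures rather than linux_raid_member, the disks were never mdadm arrays to begin with, and mdadm has nothing to assemble.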

  • just_another_person@lemmy.world · 10 hours ago

    There are a few things at work here:

    1. There isn’t much “hardware” RAID anymore, because software offloading works just fine and doesn’t draw excessive resources.
    2. It sounds like you want to just take your existing disks and pop them into something else, which won’t work.
    3. You shouldn’t be running RAID over any external connections for a number of reasons if the coordinator (your machine) is hosting it. I can go deeper into that if you want.

    You want a self-contained NAS that manages its own RAID and disks. I would honestly just get a diskless unit and start clean. You’ll be better off in the long run.

      • just_another_person@lemmy.world · 9 hours ago

        The main issue is the statefulness of the host.

        Say you’re on a laptop and you get an external JBOD box without any hosted controller. You use that laptop to set up a RAID1 array on 2 disks and go about your business. A few weeks in, you’re in the middle of editing video or whatever, and you have a power outage.

        That RAID array is assuredly damaged or dead. Your host machine, as the controller caught in the middle of a write when the entire array disappears, is going to give up quickly, and the cached data that was in flight is gone. You miiight be able to recover the array if you’re lucky, but whatever you were working on is gone.

        A number of different scenarios where this can happen exist without a power outage, but the problem is the target not being able to manage its own interrupts: you end up with two different states on two different devices that won’t match. It’s toast.
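        You can actually see that state divergence with mdadm after an unclean disconnect: each member records an event counter, and members that missed writes fall behind the rest. A sketch, with placeholder array and member names:

```shell
#!/bin/sh
# Compare per-member event counters after an unclean shutdown or
# disconnect. Members whose "Events" count lags the others missed
# writes during the outage and are kicked out of the array until
# re-added or force-assembled.
[ -r /proc/mdstat ] && cat /proc/mdstat   # overall array state

for dev in /dev/sdb1 /dev/sdc1; do        # placeholder member partitions
    [ -b "$dev" ] || continue
    echo "== $dev =="
    mdadm --examine "$dev" | grep -E 'Events|Array State'
done

# Last resort, accepting possible loss of the in-flight data:
#   mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1
```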

        • Lemmchen@feddit.org · 2 hours ago

          You miiight be able to recover the array if you’re lucky

          I don’t see how this would apply. Having the disks connected externally is the same as having them connected internally; maybe over a different bus/protocol, but the principle is the same. No RAID solution I know of would lose the array on a power outage.

          the problem is the target not being able to manage its own interrupt

          Honestly, I don’t see how interrupt handling would be any different between internally and externally connected devices, except that different buses/protocols handle it differently intrinsically. Are you absolutely sure this is a thing, or are you just speculating?

          you have two different states

          Maybe I’m too spoiled by using ZFS, but again I don’t think this would actually be a problem. AFAICT you don’t even need a CoW filesystem for it not to be a problem: every journaling filesystem (e.g. ext4) solves this by discarding the newest, non-consistent data and restoring a working state.
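          That replay behavior is easy to observe. A sketch, assuming an ext4 volume that went down mid-write and a ZFS pool named tank (placeholder name):

```shell
#!/bin/sh
# ext4: the journal is replayed automatically at the next mount; the
# kernel logs the recovery while discarding transactions that never
# fully committed.
dmesg 2>/dev/null | grep -iE 'ext4.*recovery' || echo "no ext4 recovery logged"

# ZFS: writes commit in atomic transaction groups, so an interrupted
# pool imports at the last consistent txg; -F rewinds further back if
# the newest txg is itself damaged:
#   zpool import -F tank
#   zpool status -v tank
```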

          I mean, there are 60-bay 19" expansion units for enterprise storage systems. I doubt those would exist if having the drives connected externally were a problem.