I am thinking of extending my storage and I don’t know if I should buy a JBOD (my current solution) or a RAID-capable enclosure.

My “server” is just a small Intel NUC with an 8th-gen i3. I am happy with the performance, but that might be impacted by a bigger software RAID setup. My current storage is a 4-bay JBOD with 4 TB drives in RAID 5, and I am thinking of going to 6 × 8 TB drives in RAID 6, which will probably be more work for my little CPU.
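Quick napkin math on usable capacity, assuming single-parity RAID 5 loses one drive to parity and dual-parity RAID 6 loses two:

  # usable capacity = (drives - parity) * drive size, in TB
  echo "current RAID 5: $(( (4 - 1) * 4 )) TB usable"   # 12 TB
  echo "planned RAID 6: $(( (6 - 2) * 8 )) TB usable"   # 32 TB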

  • poVoq@slrpnk.net · 2 years ago

    Normally I would say software, or rather a RAID-like filesystem such as btrfs or ZFS. But in your specific case of funneling it all through a single USB-C connection, it is probably better to keep using an external box that handles it all internally.
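    For reference, if you did go the filesystem route on a non-USB setup, a six-disk raidz2 pool is a one-liner. A minimal sketch, with placeholder device names:

      # use stable /dev/disk/by-id/ names so the pool survives re-enumeration
      zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
      zpool status tank   # all six should show ONLINE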

    That said, the CPU load of software RAID is very small, so that isn’t really something to be concerned with. But USB connections are quite unstable and not well suited to directly connecting drives in a RAID.
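    You can sanity-check how cheap the parity math is on your own hardware: the kernel benchmarks its RAID 6 algorithms when the raid module loads. Illustrative output below; exact numbers vary by CPU:

      dmesg | grep -i raid6
      #   raid6: avx2x4 gen() 25000 MB/s
      #   raid6: using algorithm avx2x4 gen() 25000 MB/s

    Parity generation at tens of GB/s dwarfs what six spinning disks can write, so the overhead is noise.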

    • BentiGorlich@gehirneimer.de (OP) · 2 years ago

      I mean, I’ve been running the setup this way for >4 years and never had any problem with the USB connection, so I cannot attest to “USB connections are quite unstable”…

      • poVoq@slrpnk.net · 2 years ago

        I suppose that is because the JBOD box was handling the RAID internally, so short connection issues are not that problematic and can be recovered from automatically. That wouldn’t be the case if you connected everything together with a USB hub and USB-to-SATA adapters and ran a software RAID on top.
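        With mdadm over a flaky USB link, even a brief disconnect typically kicks the member out of the array, and you would have to notice and recover by hand; roughly (device names are just examples):

          # a dropped member shows the array as degraded
          mdadm --detail /dev/md0

          # if the same disk came back, --re-add resyncs it (a quick
          # resync needs a write-intent bitmap on the array)
          mdadm /dev/md0 --re-add /dev/sdc
          cat /proc/mdstat   # watch the recovery progress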

  • originalucifer@moist.catsweat.com · 2 years ago

    just my 2 cents: if you’re going to do raid, buy a thing that will do it…

    a nas or enclosure where the hardware does all the heavy lifting. do not build a raided system from a bunch of disks… i’ve had, and friends of mine have had, many failures over the years from those home-brew raids failing in one way or another, and it’s usually the software that causes the raid to go sideways… maybe shit’s better today than it was 10-20 years ago.

    it’s just off my list. i bought a bunch of cheap nas devices that handle the raid, and then i mirror those devices for redundancy.
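    fwiw the mirroring can be as dumb as a nightly rsync from one box to the other; a rough sketch, paths being whatever your mounts are:

      # one-way mirror; --delete makes the target match the source exactly,
      # so deletions propagate too -- keep snapshots if you want an undo
      rsync -a --delete /mnt/nas-primary/ /mnt/nas-backup/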

    • Doubletwist@lemmy.world · 2 years ago

      Y’all must be doing something wrong, because HW RAID has been hot garbage for at least 20 years. I’ve been using software RAID (mdadm, ZFS) since before 2000 and have never had a problem that could be attributed to the software RAID itself, while I’ve had all kinds of horrible things go wrong with HW RAID. And that holds true not just at home but professionally, with enterprise-level systems, as a SysAdmin.

      With the exception of the (now rare) bare-metal Windows server, or the most basic boot-drive mirroring for VMware (with important datastores on NAS/SAN, which use software RAID underneath, with at most some limited HW-assisted accelerators), hardly anyone has trusted hardware RAID for decades.

        • atzanteol@sh.itjust.works · 2 years ago

          Y’all must’ve been doing something wrong with your hardware RAID to have so many problems. Anecdotally, as an admin for 20+ years, I’ve never had a significant issue with hardware RAID. The exception might be the Sun 3500 arrays; those were such a problem, and we had dozens of them.

          So what were you doing wrong to have so much trouble with the Sun 3500s?

  • Paragone@lemmy.world · 2 years ago

    I read somewhere, years ago, that RAID 6 takes about 2 cores on a working server.

    That may have been a decade ago, and hardware’s improved significantly since then.

    Bet on 1 core being saturated, at minimum, with heavy use of a RAID 6 or Z2 array, I suspect…
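    If you want a real number instead of my guess, you could watch the md kernel thread while hammering the array; a rough sketch, with example paths and device names:

      # generate sustained writes to the array
      dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192 oflag=direct &

      # per-thread CPU of the raid6 worker (e.g. md0_raid6) while dd runs
      ps -eLo pcpu,comm | grep raid6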


    I’d go with software RAID, not hardware: with hardware RAID, a dead array due to a dead controller card means you need EXACTLY the same card, possibly the same firmware revision, to be able to recover the RAID.

    With mdadm, that simply isn’t a problem: mdadm can always understand mdadm RAIDs.
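    Recovery on a replacement machine is just a matter of plugging the disks in; a minimal sketch, assuming all members are attached (the /dev/sd[b-g] glob is an example):

      # read the RAID metadata straight off the disks -- no controller needed
      mdadm --examine /dev/sd[b-g]

      # reassemble from whatever members are found
      mdadm --assemble --scan
      cat /proc/mdstat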

    _ /\ _