The title says basically everything but let me elaborate.

Given the recent news that hard drives are sold out for the current year and possibly the next few years as well (Tom's Hardware article), I'm trying to buy the HDDs I want to use for the next few years earlier than planned.

I am on a really tight budget, so I really don't want to overspend. I have an old tower PC lying around which I would like to turn into a DIY NAS, probably with TrueNAS Scale.

I don't expect high loads; it will only be 1-2 users with moderate reading and writing.

In this article from How-To Geek the author explains the differences, and I get it, but a lot of the commenters seem to be in a similar position to mine: not much read/write load, only a few users, and many argue consumer HDDs are fine for this use case.

Possibilities I've come up with so far:

  1. Buy two pricey Seagate Ironwolf or WD Red HDDs and put them in RAID1
  2. Buy three cheaper Seagate Barracuda or WD Blue drives, put two in RAID1, and keep one as a spare for if (or should I say when?) one of the drives in use fails.
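To put rough numbers on the trade-off between the two options, here is a quick sketch. All prices and capacities are made-up placeholders, not quotes; plug in real offers. With these example numbers, the cold spare in option 2 eats the per-drive saving:

```python
# Hypothetical cost comparison of the two options above.
# Prices and capacities are invented placeholders.

def cost_per_usable_tb(total_price, usable_tb):
    """Price divided by the capacity you can actually fill."""
    return total_price / usable_tb

# Option 1: two 4 TB NAS drives in RAID1 -> 4 TB usable
option1 = cost_per_usable_tb(2 * 120, 4)   # assumed 120 EUR per drive

# Option 2: three 4 TB desktop drives, two in RAID1 + one cold spare -> 4 TB usable
option2 = cost_per_usable_tb(3 * 80, 4)    # assumed 80 EUR per drive

print(f"Option 1: {option1:.0f} EUR per usable TB")
print(f"Option 2: {option2:.0f} EUR per usable TB")
```

With these placeholder prices both come out to 60 EUR per usable TB, so the real question becomes which drives fail less, not which option is cheaper.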

I am thankful for every comment or experience you might have with this topic!

  • eleitl@lemmy.zip · 7 hours ago

    Use ZFS. Use RAID-suitable (TLER) drives, which limit retries before reporting a failure. Use a stripe over mirror pools (the RAID10 equivalent). Buy a spare in advance; a hot spare starts rebuilding immediately after a failure. Scrub regularly (crontab). Always remember: RAID is not a backup.

    • theorangeninja@sopuli.xyz (OP) · 6 hours ago

      I think there are too many technical words I don't understand yet in this comment, but thanks nonetheless. One thing I want to ask specifically though: is a hot spare a disk that gets mirrored every now and then, or what do you mean by that?

      • eleitl@lemmy.zip · 5 hours ago

        I can explain in more detail later.

        A hot spare is a spinning disk that is known to the system and is automatically added to a RAID/pool when a disk there fails, and then triggers rebuild/resilvering of the RAID.

        A cold spare is a disk added manually by the user.

  • Nollij@sopuli.xyz · 7 hours ago

    If you can use SAS (you'll need a SAS PCIe card, roughly $50 used), get SAS drives. They are enterprise-grade exclusively, there is a massive supply of used drives as servers get refreshed, and very limited secondhand demand because most people can't use SAS drives.

    You won’t get the latest or largest drives, but you’ll get something that works perfectly fine for home use.

    • Evil_Shrubbery@thelemmy.club · 7 hours ago

      Yes, but enterprise-grade stuff on the secondhand market is basically always fairly priced (you don't get a "good deal", just a normal one).

      That said, I would still rather go with refurbished server disks than desktop ones, especially at lower capacities.

        • Evil_Shrubbery@thelemmy.club · 3 hours ago

          Also SATA. There’s basically no difference.
          (We don’t have all the same needs, so no right or wrong answers. All my current HDDs are SATA, which saves me one smol hassle in potential migrations/changes/salvages/troubleshooting, but nothing major. Perhaps they were even a little tiny bit cheaper new than the same with SAS, but that’s not a rule. I do have two SAS SSDs bcs they looked cool & I wanted to have them even after they become obsolete.)

          Oh, and to clarify the mythos around why "enterprise" in cases when the hardware seems to be the same CMR: this might not be true, but I feel some components (eg on the board) might be better grade, or suited by design to work 24/7 or survive hours of write-intensive load (like ZFS repairs). And finding data/reviews on such stuff whilst having other priorities/constraints (loudness, price) is hard.

          • Nollij@sopuli.xyz · 2 hours ago

            It’s not really about 24/7, but it is about quality of components. Enterprise gear is made using slightly better parts and tighter tolerances. Things like more expensive capacitors rated for more hours/cycles, better power filters, things like that.

            The end result (and this is easily verified) is the failure rate is much, much lower than comparable consumer-grade equipment.

            There is sometimes a blurry line between what counts as enterprise vs pro-sumer vs consumer gear, though.

      • Nollij@sopuli.xyz · 2 hours ago

        This is highly dependent on what your needs are and how you plan to solve them. SATA-3 maxes out at 6 Gbit/s, which SAS-2 already had in 2009. Most cards are PCIe x8 and have at least 4 full-speed SAS lanes (of whatever generation); that means 24 Gbit/s. PCIe 2.0 x8 (from 2007) does 4 GB/s (32 Gbit/s). So if that meets your needs, you can run it on an ancient board.
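        The link-speed arithmetic above can be sanity-checked with a quick sketch. SATA-3, SAS-2 and PCIe 2.0 all use 8b/10b line coding (10 bits on the wire per 8 bits of payload), which is why 6 Gbit/s of line rate works out to 600 MB/s:

```python
# Back-of-envelope payload bandwidth for the links mentioned above.
# SATA-3, SAS-2 and PCIe 2.0 all use 8b/10b line coding:
# 10 bits on the wire carry 8 bits of payload.

def payload_MBps(line_rate_gbit):
    """Payload MB/s from a line rate in Gbit/s under 8b/10b coding."""
    return line_rate_gbit * 1000 / 10

sata3 = payload_MBps(6)          # one SATA-3 port
sas2_x4 = 4 * payload_MBps(6)    # 4 SAS-2 lanes on a typical HBA (24 Gbit/s line rate)
pcie2_x8 = 8 * payload_MBps(5)   # PCIe 2.0: 5 GT/s per lane, x8 slot

print(sata3, sas2_x4, pcie2_x8)  # 600.0 2400.0 4000.0
```

        So even a 2009-era 4-lane SAS-2 HBA in a PCIe 2.0 x8 slot has headroom for a stack of spinning disks.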

        However, if you need something more advanced, such as SAS-3, a SAS expander, or a card with more native lanes, then you would need to plan accordingly.

        I’ve been running on an LSI 9211-4i4e, which is only a PCIe 2.0 card, for many years. I did notice my speeds dropped when I expanded the 4e to a 15-bay DAS (plus the 4 internal SATA drives), but it’s still enough to meet my needs.

        • theorangeninja@sopuli.xyz (OP) · 1 hour ago

          I will definitely have to do my homework on all of these transfer rates lol

          SAS drives are way cheaper on eBay and such, but I don't know if I feel confident enough to move from SATA to SAS for my first DIY NAS just yet. It sounds very easy with the PCIe SAS adapter, but still daunting at the same time.

      • prenatal_confusion@feddit.org · 2 hours ago

        PCIe is ridiculously fast. I think you will have a hard time maxing out even an x1 PCIe 4.0 slot with HDDs. It can do about 1.9 GB per second.
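        To gauge how hard that lane is to saturate with spinning disks, a rough sketch (the 250 MB/s per-drive sequential speed is a generous assumption; real HDDs are often slower):

```python
# Rough check: one PCIe 4.0 lane vs. spinning disks.
# PCIe 4.0 runs 16 GT/s per lane with 128b/130b encoding.
lane_GBps = 16 * 128 / 130 / 8   # payload GB/s per lane (~1.97)

hdd_MBps = 250                   # assumed sequential speed of one HDD
drives_to_saturate = lane_GBps * 1000 / hdd_MBps

print(f"{lane_GBps:.2f} GB/s per PCIe 4.0 lane")
print(f"~{round(drives_to_saturate)} HDDs reading flat out to fill one lane")
```

        In other words, it would take roughly eight HDDs all streaming sequentially at once to bottleneck a single modern lane.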

        • theorangeninja@sopuli.xyz (OP) · 1 hour ago

          idk which naming scheme is worse: PCIe or USB. Not that I would know too much about transfer speeds, but the plethora of different versions surely doesn't help.

  • UnfortunateShort@lemmy.world · 9 hours ago

    I bought server drives because the price/TB was lower, and I want to give you just one hint: server drives tend to be very loud.

    • Evil_Shrubbery@thelemmy.club · 7 hours ago

      Just fyi in case it helps someone: from the "basically enterprise" level, only Exos & Red Plus (not Pro) are "quiet" in a rubber caddy (bcs HDD acoustics come from the drive itself as well as vibrations passed along the rig). But the latter are very smol.

  • NaibofTabr@infosec.pub · 12 hours ago

    First and most important:

    In the context of long-term data storage
    ALL DRIVES ARE CONSUMABLES

    I can’t emphasize this enough. If you only skim the rest of my post, re-read the above line and accept it as fundamental truth. “Long-term” means 1+ years, by the way.

    It does not matter what type of drive you buy, how much you spend on it, who manufactured it, etc. The drive will fail at some point, probably when you're least prepared for it. You need to plan around that. You need to plan for the drive being completely useless and the data on it unrecoverable post-failure. Wasting time and money acquiring the fanciest, most bulletproof drives on the market is a pointless resource pit, and has more to do with dick-measuring contests between data hoarders than with keeping data safe.

    Knife geeks buy $500+ patterned steel chef’s knives with ebony handles and finely ground edges and bla bla bla. Professional kitchens buy the basic Victorinox with the plastic handle. Why? Because they actually use it, not mount it on a wall to look pretty.

    The knife is a consumable, not an heirloom. So are your storage drives. We call them “spinning rust” for a reason.

    The solution to drive failure is redundancy. Period.

    Unfortunately, this reality runs counter to the desire to maximize available storage. Do not follow the path of desire, that way lies data loss and outer darkness. Fault-tolerant is your watchword. Component failure is unpredictable, no matter how much money you spend. A random manufacturing defect will ruin your day when you least expect it.

    A minimum safe layout is to have 2 live copies of data (one active, one mirror), hot standby for 1 copy (immediate swap-in when the active or mirror fails), and cold standby on the shelf to replace the hot standby when it enters service.

    Note that this does not describe a specific number of disks, but copies of data. The minimum to implement this is 4 disks of identical storage capacity (2 live, 1 hot standby, 1 on the shelf) and a server with slots for 3 disks. If your storage needs expand beyond the capacity of 1 disk, then you need to scale up by the same ratio. A disk is indivisible - having two copies of the same data on a disk does not give you any redundancy value. (I won’t get into striping and mucking about with weird RAID choices in this post because it’s too long already, but basically it’s not worth it - the KISS principle applies, especially in small configurations)

    This means you only get to use 25% of the storage capacity that you buy. Them’s the breaks. Anything less and you’re not taking your data longevity seriously, you might as well just get a consumer-grade external drive and call it a day.
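    The scaling rule above can be written out as a small sketch (the capacities are example numbers, and the function name is mine):

```python
import math

def disks_needed(data_tb, disk_tb):
    """Disks required by the 2-live + 1-hot-standby + 1-cold-standby layout.

    You buy 4 copies-worth of hardware per one copy-of-the-data's
    worth of disks, since a disk is indivisible."""
    per_copy = math.ceil(data_tb / disk_tb)  # disks to hold one copy
    return 4 * per_copy

print(disks_needed(3, 4))   # 3 TB of data on 4 TB disks -> 4 disks (25% usable)
print(disks_needed(6, 4))   # outgrow a single disk -> 8 disks
```

    The jump from 4 to 8 disks the moment your data outgrows one drive is why it pays to buy larger disks than you currently need.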

    Buy 4 disks, it doesn’t matter what they are or how much they cost (though if you’re buying used make sure you get a SMART report from the seller and you understand what it means) but keep in mind that your storage capacity is just 1 of the disks. And buy a server that can keep 3 of them online and automatically swap in the standby when one of the disks fails. Spend more money on the server than the disks, it will last longer.

    Remember, long-term is a question of when, not if.

    • theorangeninja@sopuli.xyz (OP) · 3 hours ago

      Thanks a lot for this very in-depth response! I had to take some time to sit down and properly understand everything, so sorry for the late response.

      I will take your advice seriously and buy at least three drives of the same size (to get this right: the model or brand doesn't necessarily need to be the same, right?), because I don't know if I can afford four of the size I would like to get. One question I still have: how do you achieve the hot standby? Is this a software feature that common NAS OSes provide?

    • SkyNTP@lemmy.ml · 10 hours ago

      I enjoyed the depth of this answer. That being said…

      4 copies seems like a level of paranoia that is not practical for the average consumer.

      3 is what I use, and I consider that an already more advanced use case.

      2 is probably most practical for the average person.

      Why do I say this? The cost of the backup solution needs to be less than the expected loss: roughly the value of the data itself, plus the effort to recover the incrementally missing data at the value of your time, all weighted by the chance of failure.
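      One way to read that rule is as an expected-loss comparison. A hedged sketch (all inputs are subjective estimates, and the function name and numbers are mine, so treat the output as a gut check, not an answer):

```python
def backup_worth_it(backup_cost, data_value, recovery_cost, p_failure):
    """True if the backup costs less than the expected loss without it.

    data_value:    what the unrecoverable data is worth to you
    recovery_cost: your time and effort to re-find what can be re-found
    p_failure:     chance of losing the data over the period considered
    """
    expected_loss = (data_value + recovery_cost) * p_failure
    return backup_cost < expected_loss

# A $200 backup drive protecting data you'd value at $4000,
# plus ~$1000 of your time, with a ~10% failure chance per year:
print(backup_worth_it(200, 4000, 1000, 0.10))   # True: 200 < 500

# Easily re-downloadable media worth little: not worth a dedicated backup.
print(backup_worth_it(200, 100, 50, 0.10))      # False: 200 > 15
```

      The second case is exactly the "$2 thumb drive" point: for most people, only a small slice of their data clears this bar.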

      In my experience, very few people have data that is so valuable that they need such a thorough backup solution. Honestly, a $2 thumb drive can hold most of the data the average user would actually miss and couldn't easily find again by scouring online.

      • GnuLinuxDude@lemmy.ml · 7 hours ago

        Absolutely correct. I used to maintain rigorous whole-disk backups, and made sure my MacBook also had regular Time Machine backups and that kind of thing.

        Then I realized there are actually tiers of important data. The most important stuff would be on the order of megabytes (tax documents, my lease, historical records of that stuff, and config files that I’ve built up over time).

        Then I have my vacation photos and videos. Family photos. A few gigabytes. That’s not that much in the grand scheme and it’s still easy to back these up to a cloud service for minimal to no cost.

        The rest of the data on my computer is easily recoverable or can be reconstructed with minimal effort. The OS install. The games. Media from online. I would not bother backing up this stuff.

        Once this stuff is in perspective it’s very easy to devise a backup solution that fits your needs at an appropriate price. Not everyone has usage like mine and maybe their important data is much larger than mine is, but the point is we should think about which of the data is actually important, and not blindly duplicate pointless data.

  • Clearwater@lemmy.world · 10 hours ago

    In my experience, all that truly matters is that the drive uses the right recording technology (CMR, SMR, and maybe someday HAMR will be in the hands of us consumer plebs).

    There are two reasons to care:

    1. SMR has horrible write speeds. Data can be read off the drive at the same speed as from a CMR drive, but sustained writes will be unbelievably slow.
    2. More importantly, for some reason or another (I assume the write speed), SMR drives can get kicked out of ZFS pools. There was some pretty loud talk about it several years ago, but I haven't heard much since and do not know if this is still true (I assume it is).

    If your use case involves only ever writing a small amount of data, point 1 doesn’t matter very much. If you’re using software which doesn’t care about CMR/SMR, point 2 doesn’t matter very much.

    If either point 1 or 2 matter to you, then you should go with CMR drives. If neither matter, you may go with SMR drives if you so chose.

    PS: Both WD Blues and Seagate Barracudas are (often) CMR. For Seagate, consult this page: https://www.seagate.com/products/cmr-smr-list/. WD lists SMR/CMR on its website when you look up the part number.


    In my home NAS, I use ZFS and have run all sorts of drives through it. It's run old consumer drives I've pulled out of scrap hardware, it's run NAS-grade drives, and it's run enterprise-grade drives… And since they're all CMR, I can't say there was much if any difference at all.

    The only difference between the tiers that I find interesting/useful is the number of metrics you can pull off the drive. The fancier ones expose more metrics, which could help you detect signs of failure earlier, but that requires knowing what to look for.

    So at the end of the day, as long as the drive’s recording technology works with your software, you’re fine.


    RE: External drives (seen in a comment)

    External drives can be a great way to get disks for cheap, however they are loot boxes. What drive you get inside of them depends on the capacity, the manufacturer, and pure luck. You can generally look up the model number and see what people have said is inside, then hope you get whatever they got. (Generally, manufacturers don’t often change what they put in there, but they do change over time.)

    • theorangeninja@sopuli.xyz (OP) · 43 minutes ago

      Thanks a lot for this elaboration!

      Unfortunately, as mentioned in another comment below, Barracudas are now mostly SMR. And I didn't find any offers for the Barracuda Pro, which should be CMR.

      Anyway, I will most probably buy some smaller CMR drives, spend a bit more money now, lay out a proper backup strategy, and only store data that cannot easily be re-downloaded, and I should be good.

    • Sockenklaus@sh.itjust.works · 3 hours ago

      Both WD Blues and Seagate Barracudas are (often) CMR.

      I don’t know about WD Blue but modern Barracudas (not Pro!) use SMR.

      But anyway, I wanted to add a thought regarding SMR vs. CMR: it's true that SMR has inferior write speed compared to CMR, and that you can experience the effect even after writing just a few gigabytes. I don't know if I would call it unbelievably slow, though: when writing to SMR drives, I experienced write speeds around 30 MiB/s, which is slow, but considering you may be writing to a NAS that is only connected over a 1 Gbps network, that is still around 30 % of the write speed you could reach with proper drives. It's slow, but it gets the job done when you're not in a hurry and are on a tight budget.
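      Putting those numbers side by side for, say, a 100 GiB copy (the 112 MiB/s figure is my assumption for what ~1 Gbps Ethernet delivers after protocol overheads):

```python
# Time to write 100 GiB: SMR-limited vs. gigabit-network-limited.
data_gib = 100

smr_mibps = 30     # sustained SMR write speed observed above
net_mibps = 112    # assumed usable throughput of 1 Gbps Ethernet

hours_smr = data_gib * 1024 / smr_mibps / 3600
hours_net = data_gib * 1024 / net_mibps / 3600

print(f"SMR-limited: {hours_smr:.2f} h, network-limited: {hours_net:.2f} h")
print(f"SMR reaches ~{smr_mibps / net_mibps:.0%} of gigabit line speed")
```

      So an overnight backup window hides the difference entirely; an interactive bulk copy does not.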

      Also, there are other possible bottlenecks you may encounter: I, for example, built my home server with used enterprise drives in mind and therefore opted for software RAID 6 for double the fail-safety. Turns out that writing to that array is so heavy on my server's CPU that it throttles writes to almost the same level as SMR drives, which defeats the whole point of using enterprise drives. 🤣 This may not be a problem for OP because they wrote about buying 2 or 3 drives, but everyone should always consider the whole system and not single components.

  • fozid@feddit.uk · 10 hours ago

    Backups and backups and backups; then and only then can you trust that your data is safe. I run all consumer disks: 2×2 TB SSDs in a RAID1 for user-facing storage, which are always powered up and mounted. I only have 2 users in total, so it's a relatively light load. Then I have 2×8 TB HDDs which power on once a day at most, for as long as it takes rsync to complete its nightly backups; then they unmount and power off. I've been running this for 18 months and haven't had any issues. My HDDs will last years with their current load and usage, with probably only a few hundred MB written every night. But if your data is managed and backed up sensibly, and you use RAID effectively, cheap disks aren't a worry.

    • theorangeninja@sopuli.xyz (OP) · 3 hours ago

      Very interesting take; if only SSDs weren't so expensive right now, damn!

      But you are correct, I should probably worry a lot more about my backup strategy than about the drives.

  • irmadlad@lemmy.world · 9 hours ago

    I buy consumer-grade drives. I've had some fail over the years; it's inevitable no matter whether you spend money on consumer grade or server grade. @NaibofTabr@infosec.pub pretty much gives a succinct breakdown of the situation further down in this thread. I'm pretty fastidious about backing up my data. I've been burned a couple of times, nothing earth-shattering, but it was enough to do a proper 3-2-1 scenario for all data. And I treat my drives well: each drive bay has its own cooling fan, and I keep an eye on the SMART status as well. But even all of that isn't going to save you from a crash.

    Recently, I tried to revive a friend's 4 TB external drive he stored all his pictures on. He is a photographer. The first thing I asked was: where's your backup? What backup? Well, he's now got a 4-bay NAS, doing RAID, and backing up nightly in a 3-2-1 scheme. It just takes once for most people.

    • Evil_Shrubbery@thelemmy.club · 6 hours ago

      My last HDD failure was back in the PATA era (so consumer stuff, the legendary "Deathstar"), and not one drive has failed across my 3 home NAS systems since. I'm just saying your mileage may vary & it's luck-based.

      Also RAID will have more reads/writes than non-RAID systems.

      • paper_moon@lemmy.world · 2 hours ago

        I think this stuff sort of depends on how often you upgrade drives. I bought 2 4TB drives in 2015 running in a ZFS mirror, spinning 24/7, as I had heard that the hardest thing on a spinning disk is the initial spin-up from a cold boot or sleep. (I'm not sure if this is still true, but I had disabled sleep on the drives regardless.)

        5 years later, I bought 2 10TB drives to upgrade my storage capacity, and relocated the 4TB mirror to media content, and stuff that was replaceable if the drives failed, so I didn’t need to really back it up.

        Juuust now, at the end of 2025, 1 of the initial 4 TB drives failed and now my ‘old’ ZFS mirror is in a degraded state running on 1 drive, but the drive that failed lasted 10 years.

        I bet the average home labber or self-hoster is upgrading to higher-capacity drives more often than every 10 years, so they may never actually see a drive fail in real-life use.

        • Evil_Shrubbery@thelemmy.club · 2 hours ago

          Yes, these are also exactly my thoughts (& also why it's prob fine to recommend lightly used drives - but maybe not 10-year-old drives).

  • glizzyguzzler@piefed.blahaj.zone · 14 hours ago

    Now is a bad time to buy hard drives price-wise. Massive price gouging going on with all storage pre-sold based on IOUs to “AI” companies.

    If you must…

    Buy used enterprise drives with a ~5-year warranty. In the US there are ServerPartDeals and GoHardDrives. I'm not sure of the European equivalents, but I'm sure they exist. The enterprise drives should be cheaper than new drives and will last longer; they've been used past the early-failure part of the bathtub curve, but they're young enough to be given a 5-year warranty. Make sure to get ones with SATA connectors, not SAS; you'll need a PCIe card to talk to the SAS ones, and maybe something for power, idk.

    They should be cheaper - I am not sure if price uncertainty has upended that.

    Enterprise drives are louder; I have them in a quiet case with sound-dampening padding (Fractal Define) and I do not hear them 5 feet away.

    I have heard bad things about consumer drives' longevity. I used several 1 TB Barracudas for years with no issues in a server setting; I used 3 TB Barracudas in a server setting and one failed early; I used a 4 TB Toshiba that failed early; and I used an 8 TB Blue that is fine in a personal computing setting. The enterprise drives I have bought have no issues yet.

    It seems like luck of the draw, so the thing to optimize is cost per GB.

    • Nollij@sopuli.xyz · 7 hours ago

      I recommend against Go Hard Drives. They get drives that previously failed but currently test ok, then wipe the SMART data. I had a whopping 133% failure rate (all 3 original, plus 1 replacement) before I returned the whole thing.

      If you insist on using them, do the most extensive burn-in testing you possibly can. I would test for at least a full week to make sure each drive is actually (semi-)reliable.

      • glizzyguzzler@piefed.blahaj.zone · 5 hours ago

        I got several from them and they've been fine for a year now - and theoretically they have a 5-year warranty from them too. So it worked out for me to save some cash! Buuuut if they do end up failing, it's gonna be a hassle to get replacements for sure.

    • theorangeninja@sopuli.xyz (OP) · 13 hours ago

      I know it is a bad time, but I had planned to stick with my normal external consumer hard drive for at least another year. Should I wait, and just hook up an external USB drive to my RPi and use it as a Samba share for backups?

      I checked for used enterprise drives, but I haven't found anything with SATA for a reasonable price yet, unfortunately.

      • glizzyguzzler@piefed.blahaj.zone · 5 hours ago

        Consumer is fine then, the cheapest you can get. Edit: I did see people mention SMR drives; get the cheapest CMR drives instead. SMR is not worth the money saved for usual use cases.

        You can def wait, but do the over-under with what you can pay. External drives, even if shucked, seem to be the lowest quality drives and die earliest. May be better to get real drives now, even with inflated costs.

        Make sure you get a drive for backup. Extra outlay up front, but worth it. I'd recommend 1 data drive + 1 backup drive over just 2 RAID1 data drives any day.

        • theorangeninja@sopuli.xyz (OP) · 1 hour ago

          Yes, I will definitely get a CMR drive, I read enough comments warning me about SMR lol

          Yeah, I should probably spend a bit more money and buy something good instead of some garbage.

          Would you consider that better even if the backup drive is in the same house as the data drive, just powered off?

          • glizzyguzzler@piefed.blahaj.zone · 4 minutes ago

            A backup drive doesn't need to do anything more than hold your (ideally daily) backup of your main drive(s). It doesn't need to be powered up and spinning all the time, and it can be in the same computer. Spinning up and down causes major wear on hard drives, but I think spinning up once a day for backups is fine and won't stress it.

            For example, I have 3 used enterprise drives in my computer case: 2 in BTRFS RAID1 (mirror) as a data drive and 1 with BTRFS as a backup drive. I use snapshotting to mirror the data drive to the backup drive. I then use restic to copy essential data from the backup drive to a remote location (a friend's house with a 4th, smaller hard drive; if I did not have a friend with a hard drive I would most likely use Hetzner). My Linux ISOs don't go remote, but my photos do.

            Thus I have immediate redundancy (and bit rot protection) from the BTRFS RAID1 data drives, I have a local full backup with the BTRFS backup drive, and I have my essential stuff far away if the computer explodes or something.

  • TomAwezome@lemmy.world · 9 hours ago

    I've got some Western Digital and Samsung HDDs I bought refurbished a while ago, in a RAID10 configuration for 4 TB total. I recently set up backups to that and to an SSD; the read/write speeds are slow but tolerable, and I'm not doing anything that needs high throughput. For backups from multiple machines it does the job, and I make sure to upload the encrypted backups to remote storage once a month, so if there's any catastrophic data loss on my end it's all recoverable.

  • comrade_twisty@feddit.org · 13 hours ago

    Never use SMR disks in a RAID/NAS. You're taking a huge gamble on data loss / disk failures if you do. It will also be slow as fuck, really fucking slow, often unusable for anything but incredibly slow backups.

    CMR disks are the ONLY way to go.

    • theorangeninja@sopuli.xyz (OP) · 13 hours ago

      So I could use an external hard drive for backups and only turn it on when needed? But it would be very slow as you said.

  • Decronym@lemmy.decronym.xyz (bot) · 1 minute ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters More Letters
    NAS Network-Attached Storage
    PCIe Peripheral Component Interconnect Express
    RAID Redundant Array of Independent Disks for mass storage
    RPi Raspberry Pi brand of SBC
    SATA Serial AT Attachment interface for mass storage
    SBC Single-Board Computer
    SSD Solid State Drive mass storage
    ZFS Solaris/Linux filesystem focusing on data integrity

    7 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

    [Thread #114 for this comm, first seen 24th Feb 2026, 09:10] [FAQ] [Full list] [Contact] [Source code]

  • MagnificentSteiner@lemmy.zip · 12 hours ago

    I have Seagate Barracuda drives in my NAS because I didn’t know about CMR vs SMR before I bought them.

    2 of them are backups, the other spins all the time. The bulk of my storage is video files with infrequent adding of new stuff. The active drive has qBittorrent seeding from it 24/7 so it can be a bit noisy.

    Other than that, you'll see lower transfer speeds from SMR drives, but nothing to worry about if it's small writes or infrequent copying of large video files. It also takes an age to run a long SMART self-test: 18 hrs on an 8 TB HDD that is 75% full (and this will get worse as it gets closer to full).

    So SMR drives aren’t ideal but they’ll do the job for a “write once, read many times” style of storage. I wouldn’t buy them at all for a RAID setup. If you can, you’d be better buying refurbished enterprise drives but I have no idea what availability there’ll be where you are.

  • tofu@lemmy.nocturnal.garden · 14 hours ago

    I'd go for the second option. Just make sure they are not from the same production run with a similar history (operating hours), as that would increase the chance of both failing at the same time.

    You can also check eBay for enterprise HDDs with 90%+ remaining SMART health. They are far cheaper than new and usually fine.

    • theorangeninja@sopuli.xyz (OP) · 12 hours ago

      What do you mean by production run? How could I check that before having them in my hands? And operating hours only apply to used drives, no?

      I will do some more research on eBay, but it is hard to find offers that list SMART values and operating hours. The downside of enterprise hard drives is mostly that they are louder, right?

      • tofu@lemmy.nocturnal.garden · 10 hours ago

        The downside is that they are more expensive; the loudness is mostly about big servers I think, not the HDDs themselves.

        And yes, I was talking about used ones. Sometimes they've got the SMART values listed, sometimes I ask.

        • tofu@lemmy.nocturnal.garden · 10 hours ago

          And by production run I mean produced in the same batch or the like, as those have a higher chance of failing together. If you buy two new from the same shop, odds are they came in the same shipment etc.