• Alexstarfire@lemmy.world · 13 points · 10 hours ago

    Everybody talking shit about Seagate here. Meanwhile I’ve never had a hard drive die on me. Eventually the capacity just became too small to keep around and I got bigger ones.

    Oldest I’m using right now is a decade old, a Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most-used things, so the HDDs don’t actually get hit all that much.

    • ipkpjersi@lemmy.ml · 5 points · 8 hours ago

      I’ve had a Samsung SSD die on me, I’ve had many WD drives die on me (the last drive that died on me was a WD), and I’ve had many Seagate drives die on me.

      Buy enough drives, have them for a long enough time, and they will die.

    • nova_ad_vitum@lemmy.ca · 11 points · 13 hours ago

      My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2GB Seagate hard drive. It was an unimaginable amount of space at the time.

      The computer industry in the 90s (and presumably the 80s, I just don’t remember it) was wild. Hardware would be completely obsolete every other year.

    • barsoap@lemm.ee · +4/-1 · 11 hours ago

      Not sure whether we’ll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still get cheaper, say 3x cheaper, but don’t, in any way, expect them to simultaneously keep up with write performance: that ship has long since sailed. The more bits they’re trying to squeeze into a single cell, the slower it’s going to get, and the price per cell isn’t going to change much any more, as silicon has hit a price wall. It’s been a while since the newest, smallest node was also the cheapest.

      OTOH, how often do you write a terabyte in one go at full tilt?

  • Cornelius_Wangenheim@lemmy.world · 10 points · 10 hours ago

    Avoid these like the plague. I made the mistake of buying two 16TB Exos drives a couple of years ago and have had to RMA them three times already.

    • SupraMario@lemmy.world · 3 points · 13 hours ago

      I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months… I was constantly RMAing them. Finally got fed up, sold them, and bought WD Reds; I’ve still got 2 of the Reds in my NAS serving as hot backups with nearly 8 years of power-on time.

      • Cornelius_Wangenheim@lemmy.world · 3 points · 10 hours ago

        They seem to be really hit or miss. I also have two 6TB Barracudas with 70,000 power-on hours (8 years) that are still going fine.

        • SupraMario@lemmy.world · 1 point · 1 hour ago

          Nice. I agree; I’m sure there’s an opposite of me out there, telling their story of a bunch of failed WD drives and having sworn them off.

  • dragonlobster@programming.dev · 3 points · 12 hours ago

    These things are unreliable; I had 3 Seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.

    • vithigar@lemmy.ca · 1 point · 12 hours ago

      Seagate in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate of the average failure age of my WD drives, because none ever failed before being retired due to obsolescence, and that was regularly over a decade.

    • WhyJiffie@sh.itjust.works · 1 point · 12 hours ago

      Well, until you need the capacity, why not use an SSD? It’s basically mandatory for the operating-system drive anyway.

  • TheRealKuni@lemmy.world · +45/-5 · 21 hours ago

    30/32 = 0.938

    That’s less than a single terabyte. I have a microSD card bigger than that!

    ;)

  • corroded@lemmy.world · 57 points · 23 hours ago

    I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.

  • JakenVeina@lemm.ee · 4 points · 17 hours ago

    The two models, […] each offer a minimum of 3TB per disk

    Huh? The hell is this supposed to mean? Are they talking about the internal platters?

  • JasonDJ@lemmy.zip · +13/-1 · 22 hours ago

    This is for cold and archival storage right?

    I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.

    • noobface@lemmy.world · 11 points · 17 hours ago

      up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋‍♀️. Especially if you get those reps sequentially it’s like hitting the juice 💉 for your transfer speeds.

    • ricecake@sh.itjust.works · 11 points · 21 hours ago

      Definitely not for either of those. You can get way better density from magnetic tape.

      They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.

      You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.

    • WolfLink@sh.itjust.works · 4 points · 17 hours ago

      Random access times are probably similar to smaller drives, but writing the whole drive is going to be slow.

    • RedWeasel@lemmy.world · 7 points · 20 hours ago

      For a full 32TB at the max sustained speed (275MB/s), it’s about 32 hours to transfer the full amount, or 36 if you assume 250MB/s the whole run. Probably optimistic; CPU overhead could slow a rebuild down further. That said, in a RAID5 of 5 disks that works out to a transfer speed of about 1GB/s, even assuming you don’t get close to the max rate. For a small business or home NAS that would be plenty unless you are running faster than 10Gbit Ethernet.
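      The back-of-envelope math above works out like this (a sketch; it assumes a 32TB drive and the quoted 275MB/s max and 250MB/s average sustained rates):

      ```python
      # Rough full-drive transfer time: capacity in TB, sustained rate in MB/s.
      # 1 TB = 1,000,000 MB in drive-marketing units, which is what the
      # capacity figures here use.
      def hours_to_fill(capacity_tb: float, rate_mb_s: float) -> float:
          seconds = capacity_tb * 1_000_000 / rate_mb_s
          return seconds / 3600

      print(round(hours_to_fill(32, 275), 1))  # ~32.3 hours at the max rate
      print(round(hours_to_fill(32, 250), 1))  # ~35.6 hours at 250MB/s
      ```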

    • interdimensionalmeme@lemmy.ml · 1 point · 13 hours ago

      Same, but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.

  • RememberTheApollo_@lemmy.world · 7 points · 21 hours ago

    I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that’s untrue relative to newer drives.

  • veee@lemmy.ca · +8/-1 · 22 hours ago
    22 hours ago

    Just one would be a great backup, but I’m not ready to run a server with 30TB drives.

    • mosiacmango@lemm.ee · 9 points · 22 hours ago

      I’m here for it. The 8-disk server is normally a great form factor for size, data density and redundancy with RAID6/raidz2.

      This would net around 180TB in that form factor. That would go a long way for a long while.
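      The raidz2 arithmetic is straightforward (a sketch; it assumes 8 disks of 30TB with two disks’ worth of parity, before filesystem overhead):

      ```python
      # Usable capacity of a RAID6/raidz2 array: total disks minus parity
      # disks, times per-disk capacity. Ignores filesystem/metadata overhead.
      def usable_tb(disks: int, parity: int, disk_tb: float) -> float:
          return (disks - parity) * disk_tb

      print(usable_tb(8, 2, 30))  # 180.0 TB usable from an 8-disk raidz2
      ```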