• dual_sport_dork 🐧🗡️@lemmy.world · 25 days ago

    No shit. All they have to do is finally grow the balls to build SSDs in the same form factor as the 3.5" drives everyone in enterprise is already using, and stuff those to the gills with flash chips.

    “But that will cannibalize our artificially price inflated/capacity restricted M.2 sales if consumers get their hands on them!!!”

    Yep, it sure will. I’ll take ten, please.

    Something like that could easily fill the oodles of existing bays that are currently filled with mechanical drives, both on the home user/small-scale enthusiast side and in existing rackmount gear. But that’d be too easy.

    • jj4211@lemmy.world · 25 days ago

      Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated due to the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. There’d also be a thermal problem, since 3.5" bays aren’t designed for the thermal load of that much flash.

      Add to that that 3.5" bays currently top out at maybe 24Gb SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND random-access behavior. So the host stack would have to be redesigned to even vaguely handle that sort of device, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
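      To put rough numbers on the interconnect gap, here’s a back-of-envelope sketch; the per-link and per-drive throughput figures and the drive count are my assumptions, not measurements:

```python
# Back-of-envelope: one hypothetical 3.5" SSD behind a single 24Gb SAS link
# vs. the E1.S drives that would fit in roughly the same volume.
# All figures below are rough assumptions, not vendor specs.

SAS_24G_GB_S = 2.4           # ~usable GB/s of one 24Gb SAS link (approx., after encoding)
E1S_GEN5_X4_GB_S = 16.0      # theoretical GB/s of one E1.S drive on PCIe Gen5 x4
E1S_PER_35IN_VOLUME = 6      # assumed number of E1.S drives per 3.5" bay's volume

sas_total = SAS_24G_GB_S                            # the single link serves the whole device
e1s_total = E1S_GEN5_X4_GB_S * E1S_PER_35IN_VOLUME  # each E1.S gets its own lanes

print(f'3.5" SAS SSD: ~{sas_total:.1f} GB/s')
print(f'E1.S stack:   ~{e1s_total:.0f} GB/s ({e1s_total / sas_total:.0f}x)')
```

      With those assumptions the gap comes out around 40x, consistent with the “over 30 fold” figure above.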

      The EDSFF spec defined four general form factors: E1.S, which is roughly M.2-sized; E1.L, which is over a foot long and offers the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.

    • Hozerkiller@lemmy.ca · 25 days ago

      I hope you’re not putting M.2 drives in a server if you plan on reading the data from them at some point. Those are for consumers; there’s an entirely different form factor for enterprise storage using NVMe drives.

      • jj4211@lemmy.world · 24 days ago

        Enterprise systems do have M.2, though admittedly it’s mostly used for fairly disposable boot volumes.

        And while they aren’t used much as data volumes, that’s not due to unreliability; it’s due to hot-swap and power limits.

  • Sixty@sh.itjust.works · 25 days ago

    I’ll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P

  • NeoNachtwaechter@lemmy.world · 25 days ago

    Haven’t they said that about magnetic tape as well?

    Some 30 years ago?

    Isn’t magnetic tape still around? Isn’t even IBM one of the major vendors?

    • n2burns@lemmy.ca · 25 days ago

      Anyone who has said that doesn’t know what they’re talking about. Magnetic tape is unparalleled for long-term/archival storage.

      This is completely different. For active storage, solid-state has been much better than spinning rust for a long time; it’s just been drastically more expensive. What’s being argued here is that it’s now at the point where, even if it’s more expensive up front, it’s less expensive to run and maintain.

        • thedeadwalking4242@lemmy.world · 25 days ago

          Hard drives have a longer shelf life than unpowered SSDs. HDDs are a good middle ground between SSD speed, tape-drive stability, and price; they won’t go anywhere. The data world exists in tiers.

          • enumerator4829@sh.itjust.works · 25 days ago

            The flaw with hard drives shows up in large pools. When a drive fails, recovery is simply too slow, so unless you build huge pools you need additional drives for extra parity.
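            For a sense of scale, here’s a sketch of the rebuild-time math; the capacities and sustained-write rates are illustrative assumptions, not benchmarks:

```python
# Rebuild time is roughly capacity divided by sustained write speed:
# this is what makes recovery windows on big HDDs so long.

def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
    """Hours to rewrite a full replacement drive at a sustained rate."""
    return capacity_tb * 1_000_000 / write_mb_s / 3600

hdd_h = rebuild_hours(24, 250)    # 24 TB HDD at ~250 MB/s sustained (assumed)
ssd_h = rebuild_hours(24, 5000)   # 24 TB SSD at ~5 GB/s sustained (assumed)

print(f"HDD rebuild: ~{hdd_h:.0f} h, SSD rebuild: ~{ssd_h:.1f} h")
```

            A day-long degraded window per failure is why the extra parity drives are needed.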

            I don’t know who cares about shelf life. Drives spin all their lives, which is 5-10 years. Use M-DISC or something if you want shelf life.

        • Echo Dot@feddit.uk · 25 days ago

          Right up until an EMP wipes out all our data. I still maintain that we should be storing all our data on vinyl; doing it physically is the only guarantee.

    • jj4211@lemmy.world · 24 days ago

      The disk cost difference is now about 3-fold, rather than an order of magnitude.

      The disks don’t make up as much of the cost of these solutions as you’d think, so a disk-based solution of similar capacity might be more like 40% cheaper rather than 90% cheaper.

      The market for pure capacity-play storage is well served by spinning platters, for now. But there’s little reason to iterate on your storage subsystem design; the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factor has kept evolving and the interface is revised with every PCIe generation.

    • Nomecks@lemmy.ca · 25 days ago

      Spinning-platter capacity can’t keep up with SSDs. HDDs are just starting to break the 30 TB mark while SSDs are shipping at 50+ TB. The cost delta per TB is closing fast. You can also have always-on compression and dedupe in most cases with flash, so you get better utilization.

    • Natanael@infosec.pub · 25 days ago

      It’s losing its cost advantage as time goes on. Long-term storage is still on tape (and that’s actively developed too!), flash is getting cheaper, and spinning disks have inherent bandwidth and latency limits. It’s probably not going away entirely, but its main use cases are being squeezed on both ends.

  • solrize@lemmy.world · 25 days ago

    HDDs were a fad; I’m waiting for the return of tape drives. 500 TB on a $20 cartridge, and I can live with the 2-minute seek time.

  • AnUnusualRelic@lemmy.world · 25 days ago

    I’m about to build a home server with a lot of storage (relatively, around 6 or 8 times 12 TB as a ballpark), and I didn’t even consider anything other than spinning drives so far.

  • MTK@lemmy.world · 25 days ago

    I generally agree; it won’t take long for SSDs to be cheap enough to justify the expense. HDDs are in a way similar to CDs/DVDs: they had their time, they even lasted much longer than expected, but eventually the newer technology became cheap enough that the slight price advantage no longer made sense.

    SSDs win on all accounts for live systems, and long-term cold storage goes to tape. Not a lot of reasons to keep HDDs around.

    • doodledup@lemmy.world · 25 days ago

      NVMe is terrible value for storage density. There’s no reason to use it except when you need the speed and low latency.

      • jj4211@lemmy.world · 25 days ago

        There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only 3x more expensive per unit of data compared to HDD at scale, and at the cheapest end you can get either a good-enough HDD or a good-enough SSD for an OS volume at the same price, it just makes sense for the OS volume to be an SSD.

        As for “but 3x is a pretty big gap”: that’s true, and it does drive storage subsystem choices, but as the saying has long been, disks are cheap, storage is expensive. Managing an HDD/SSD mix is generally more expensive than the disk cost difference anyway.
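        The “disks are cheap, storage is expensive” point can be sketched with toy numbers; all prices and the flat tiering overhead here are made-up assumptions for illustration:

```python
# Toy model: a flat per-system cost for running an extra storage tier
# swamps the ~3x media premium at small capacities.

HDD_PER_TB = 15.0         # assumed $/TB for HDD
SSD_PER_TB = 45.0         # assumed ~3x premium for flash
TIERING_OVERHEAD = 200.0  # assumed flat cost of an HDD tier (admin, bays, HBA)

def hdd_tier_cost(tb: float) -> float:
    """Cost of capacity on HDD plus the overhead of managing that tier."""
    return tb * HDD_PER_TB + TIERING_OVERHEAD

def flash_only_cost(tb: float) -> float:
    """Cost of putting the same capacity straight on flash."""
    return tb * SSD_PER_TB

for tb in (1, 10, 100):
    cheaper = "SSD" if flash_only_cost(tb) < hdd_tier_cost(tb) else "HDD"
    print(f"{tb:>4} TB: HDD tier ${hdd_tier_cost(tb):,.0f} "
          f"vs flash ${flash_only_cost(tb):,.0f} -> {cheaper}")
```

        Under these assumptions a small OS volume comes out cheaper on flash, while at large capacities the 3x media premium dominates, which matches the tiering argument above.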

        BTW, NVMe vs. non-NVMe isn’t the thing; it’s NAND vs. platter. You could have NVMe-interfaced platters and they would perform about the same as SAS- or even SATA-interfaced ones. NVMe carried a price premium for a while, mainly because of marketing rather than technical costs; nowadays NVMe isn’t too expensive. One could argue that the number of PCIe lanes from the system seems expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now.

  • NeuronautML@lemmy.ml · 25 days ago

    I doubt it. SSDs are subject to quantum tunneling. This means that if you don’t power up an SSD once every 2-5 years, your data is gone. HDDs have no such qualms. So long as they still spin, there’s your data, and even when they no longer do, the data is still on the platters inside.

    So there’s a use case that SSDs will never replace: cold data storage. I use HDDs for my cold offsite backups.

    • MonkderVierte@lemmy.ml · 25 days ago

      You’re wrong. HDDs need to be powered up about as frequently as SSDs, because the magnetization gets weaker over time.

      • NeuronautML@lemmy.ml · 25 days ago

        Here’s a copy-paste from Super User that will hopefully show that what you said is incorrect, in a way that expresses my thoughts exactly:

        Magnetic Field Breakdown

        Most sources state that permanent magnets lose their magnetic field strength at a rate of 1% per year. Assuming this is valid, after ~69 years, we can assume that half of the sectors in a hard drive would be corrupted (since they all lost half of their strength by this time). Obviously, this is quite a long time, but this risk is easily mitigated - simply re-write the data to the drive. How frequently you need to do this depends on the following two issues (I also go over this in my conclusion).

        https://superuser.com/questions/284427/how-much-time-until-an-unused-hard-drive-loses-its-data
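        The quoted ~69-year figure checks out against the stated 1%/year decay rate:

```python
import math

# Field strength after n years at 1%/year decay is 0.99**n.
# Solving 0.99**n = 0.5 gives the half-strength point quoted above.

years_to_half = math.log(0.5) / math.log(0.99)
print(f"~{years_to_half:.0f} years to half strength")  # ~69
```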

    • n2burns@lemmy.ca · 25 days ago

      Nothing in this article is talking about cold storage. And if we are talking about cold storage, as others have pointed out, HDDs are also not a great solution. LTO (magnetic tape) is the industry standard for a good reason!

      • NeuronautML@lemmy.ml · 25 days ago

        Tape storage is the gold standard, but it’s just not realistically applicable to small-scale operations or personal data storage. Proper long-term storage HDDs do exist and are perfectly adequate for the job, as I specified above, and I can attest to this from personal experience.