this post was submitted on 02 Jun 2025
452 points (96.3% liked)

[–] thisbenzingring@lemmy.sdf.org 8 points 5 days ago (4 children)

I deal with large chunks of data, and 40TB drives are an interesting idea... until you consider one failing.

RAIDs and arrays for these large data sets still make more sense than putting all the eggs in smaller baskets.

[–] remon@ani.social 16 points 5 days ago* (last edited 5 days ago) (3 children)

You'd still put the 40TB drives in a RAID? But eventually you'll be limited by the number of bays, so a larger drive size is better.

[–] givesomefucks@lemmy.world 15 points 5 days ago (1 children)

They're also ignoring how many times this conversation has been had...

We never stopped using RAID at any other increase in drive density; there's no reason to pick this as the time to stop.

[–] jlh@lemmy.jlh.name 4 points 5 days ago (1 children)

RAID 5 is becoming less viable due to increasing rebuild times, necessitating RAID 1 instead. But new drives have better IOPS too, so maybe it's not as severe as predicted.

[–] RaoulDook@lemmy.world 1 points 5 days ago (2 children)

Yeah, I would not touch RAID 5 in this day and age. It's just not safe enough, and there's not much of an upside to it when large-capacity SSDs exist. A RAID 1 mirror is fast enough with SSDs now, or you could go RAID 10 for more speed.

[–] GoatSynagogue@lemmy.world 1 points 4 days ago (1 children)

When setting up RAID 1 instead of RAID 5 means an extra few thousand dollars of cost, RAID 5 is fine, thank you very much. Also, SSDs at the sizes many people need are not cheap, and not even a thing at a consumer level.

5x 10TB WD Reds here. SSD isn't an option, and neither is RAID 1. My ISP is going to hate me for the next few months after I set up Backblaze, haha.

[–] RaoulDook@lemmy.world 1 points 4 days ago (1 children)

But have you had to deal with the rebuild of one of those when a drive fails? It sucks waiting a really long time, wondering if another drive is going to fail and cause complete data loss.

[–] GoatSynagogue@lemmy.world 1 points 4 days ago

Not a 10TB one yet, thankfully, but I rebuilt a 4TB in my old NAS recently after it started giving warnings. It took a few days, IIRC. Not ideal, but better than the thousands of dollars it would cost to go to RAID 1. I'd love RAID 1, but until we get 50TB consumer drives for < $1k, it's not happening.

[–] jlh@lemmy.jlh.name 2 points 4 days ago

TBF, all the big storage clusters use either mirroring or erasure coding these days. For bulk storage, 4+2 or 8+2 erasure coding is pretty fast, but for databases you should always use mirroring to speed up small writes. But yeah, for home use, just use LVM or ZFS mirrors.
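
To make that trade-off concrete, here's a minimal sketch (my numbers, not the commenter's; the 14TB drive size is just an assumption) of how mirroring and 4+2/8+2 erasure coding compare on space efficiency and fault tolerance:

```python
# Rough sketch of the space/redundancy trade-off between the layouts
# mentioned above (mirroring vs. 4+2 / 8+2 erasure coding). The drive
# size is an illustrative assumption, not a figure from this thread.

DRIVE_TB = 14  # hypothetical drive capacity

layouts = {
    # name: (data shards k, parity shards / extra copies m)
    "2-way mirror": (1, 1),
    "4+2 erasure": (4, 2),
    "8+2 erasure": (8, 2),
}

for name, (k, m) in layouts.items():
    total = k + m
    efficiency = k / total          # fraction of raw space that is usable
    usable = k * DRIVE_TB           # usable TB per shard group
    print(f"{name:13s} survives {m} drive failure(s), "
          f"{efficiency:.0%} space efficiency, "
          f"{usable} TB usable per {total}-drive group")
```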

[–] catloaf@lemm.ee 3 points 5 days ago

Of course, because you don't want to lose the data if one of the drives dies. And backing up that much data is painful.

[–] acosmichippo@lemmy.world 1 points 5 days ago* (last edited 5 days ago)

Depends on a lot of factors. If you only need ~30TB of storage and two spare RAID disks, 3x 40TB disks will be much more costly than 6x 10TB disks, or even 4x 20TB disks.
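
As a purely illustrative sketch of that comparison (the per-drive prices below are placeholders, not real quotes; swap in current market prices), each configuration lands at the same usable space with two drives of redundancy, but very different drive counts and totals:

```python
# Illustrative-only comparison of the configurations mentioned above.
# Prices are PLACEHOLDERS (40TB drives aren't even retail yet); the point
# is the structure: same usable space, two drives of redundancy, very
# different totals.

def usable_tb(drive_tb: float, count: int, redundancy: int = 2) -> float:
    """Usable capacity when `redundancy` drives are reserved for parity/spares."""
    return drive_tb * (count - redundancy)

configs = [
    # (drive size in TB, drive count, PLACEHOLDER price per drive in USD)
    (40, 3, 999),
    (20, 4, 350),
    (10, 6, 180),
]

for tb, n, price in configs:
    print(f"{n} x {tb}TB: {usable_tb(tb, n):.0f} TB usable, "
          f"~${n * price} total (placeholder pricing)")
```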

[–] grue@lemmy.world 8 points 5 days ago (2 children)

The main issue I see is that the gulf between capacity and transfer speed is now so vast with mechanical drives that restoring the array after a drive failure and replacement takes unreasonably long. I feel like you'd need at least two parity drives, not just one, because letting the array sit in a degraded state for multiple days while waiting for the data to finish copying back over would be an unacceptable risk.
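
A back-of-the-envelope calculation (my assumptions: the rebuild essentially rewrites the whole drive sequentially at 150-250 MB/s, which is optimistic for large HDDs) shows how that rebuild window grows with capacity:

```python
# Back-of-envelope rebuild time for a single replaced drive, assuming the
# rebuild rewrites the full capacity at a sustained sequential rate.
# Throughput values are assumptions for illustration, not measurements.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Hours to rewrite an entire drive at the given sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB, as drives are marketed
    return capacity_mb / throughput_mb_s / 3600

for tb in (10, 20, 40):
    times = ", ".join(
        f"~{rebuild_hours(tb, rate):.0f} h @ {rate} MB/s" for rate in (250, 150)
    )
    print(f"{tb} TB drive: {times}")
```

Even under ideal sequential conditions, a 40TB rebuild is a multi-day affair, which is the window the extra parity drive is meant to cover.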

[–] Cenzorrll@lemmy.world 4 points 5 days ago

I upgraded my 7-year-old 4TB drives to 14TB drives (both setups RAID 1). A week later, one of the 14TB drives failed. It was a tense time waiting for a new drive and the 24 hours or so for resilvering. No issues since, but boy was that an experience. I've since added some automated backup processes.

[–] BakedCatboy@lemmy.ml 2 points 5 days ago

Yes, this, and also scrubs and SMART tests. I have 6x 14TB spinning drives, and a long SMART test takes roughly a week, so running two at a time takes close to a month to do all six, and then it all starts over again. For half to 75% of the time, two of my drives are doing SMART tests. Then there are scrubs, which I do monthly. I would consider larger drives if it didn't mean my SMART/scrub schedule would take more than a month. Rebuilds aren't too bad, and I have double redundancy for extra peace of mind, but I wouldn't want those taking much longer either.
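
For what it's worth, that schedule works out roughly like this (a toy sketch using the commenter's own figures: six drives, two concurrent tests, about a week per long test):

```python
# Toy calculation of the SMART long-test rotation described above:
# six drives, tested two at a time, roughly a week per long test.

import math

drives = 6
concurrent = 2
long_test_days = 7  # commenter's estimate for a 14TB drive

batches = math.ceil(drives / concurrent)
full_pass_days = batches * long_test_days
print(f"{batches} batches of {concurrent} -> ~{full_pass_days} days per full pass")
```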

[–] floofloof@lemmy.ca 5 points 5 days ago

I guess the idea is you'd still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.

[–] Jimmycakes@lemmy.world 0 points 4 days ago

These are literally only sold by the rack to data centers.

What are you going on about?