this post was submitted on 08 Aug 2023
227 points (96.0% liked)


I can't say for sure, but there is a good chance I might have a problem.

The main picture attached to this post shows a pair of dual bifurcation cards, each with a pair of Samsung PM963 1TB enterprise NVMe drives.

It is going into my R730xd, which... is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my R730xd supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMe drives than I can count.

What's the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs...

Figured I would share; a bunch of SSDs is always a pretty sight.

And as of two hours ago, my particular Lemmy instance was migrated onto these new NVMe drives, completely transparently.

[–] Millie@lemm.ee 9 points 1 year ago (2 children)

I dream of this kind of storage. I just added a second M.2 with a couple of TB on it, and the space is lovely, but I can already see I'll fill it sooner than I'd like.

[–] xtremeownage@lemmyonline.com 6 points 1 year ago (1 children)

I will say, it's nice not having to nickel-and-dime my storage.

But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

I have around 10x 1TB NVMe and SATA SSDs in a Ceph cluster, with 60% storage overhead there.

Four 8TB disks are in a ZFS striped mirror (RAID 10); 50% storage overhead.

The 4x Samsung 970 EVO / EVO Plus drives are also in a striped-mirror ZFS pool; 50% overhead.

But still PLENTY of usable storage, and highly available at that!
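If anyone wants to sanity-check those percentages, here's a quick back-of-the-napkin sketch. The Ceph replication factor (size=3) is my assumption for illustration, since the pool's actual replication/EC profile isn't stated here; plain 3x replication works out to ~67%, so the quoted 60% presumably reflects the specific profile in use.

```python
# Rough overhead math for the pool layouts above. The Ceph
# replication factor (size=3) is an assumption, not a stated fact.

def replicated_overhead(replicas: int) -> float:
    """Ceph replicated pool: usable = raw / replicas."""
    return 1 - 1 / replicas

def mirror_overhead(mirror_width: int) -> float:
    """ZFS striped mirror (RAID 10): usable = raw / mirror_width."""
    return 1 - 1 / mirror_width

print(f"Ceph, size=3 replication: {replicated_overhead(3):.0%} overhead")
print(f"ZFS 2-way striped mirror: {mirror_overhead(2):.0%} overhead")
```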

[–] krolden@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

Any reason you went with a striped mirror instead of raidz1/raidz2?

[–] xtremeownage@lemmyonline.com 3 points 1 year ago

The two ZFS pools are only 4 devices each. One pool is spinning rust; the other is all NVMe.

I don't use RAID 5 for large disks, and instead go for RAID 6 / raidz2. Given that raidz2 and striped mirrors both have 50% overhead with only 4 disks, striped mirrors have the advantage of being much faster: double the IOPS, and faster rebuilds. For these particular pools, performance was more important than overall disk space.
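For anyone wondering where the "double the IOPS" comes from, here's a sketch of the usual rule of thumb: ZFS random IOPS scale roughly with vdev count, not disk count. The per-disk figure below is made up, purely for illustration.

```python
# 4 disks, two layouts: random IOPS scale with the number of vdevs.
PER_DISK_IOPS = 100   # hypothetical random IOPS for one disk

mirror_vdevs = 2      # 4 disks as two 2-way mirror vdevs
raidz2_vdevs = 1      # 4 disks as a single raidz2 vdev

print("striped mirror:", mirror_vdevs * PER_DISK_IOPS, "IOPS")  # ~200
print("raidz2:        ", raidz2_vdevs * PER_DISK_IOPS, "IOPS")  # ~100

# Rebuilds follow the same intuition: resilvering a mirror reads only
# the surviving half of one vdev, while a raidz2 rebuild has to read
# every remaining disk in the vdev.
```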

However, before all of these disks were moved from TrueNAS to Unraid, there was an 8x 8TB raidz2 pool, which worked exceptionally well.

Cripes, I was stoked I managed to upgrade from 4x 2TB to 4x 4TB recently.