Uncompressed Blu-ray rips end up almost the same size after lossless compression. The binary content of an h264 stream is close to random bits, so deduplication is mostly a waste of CPU time. At best you might save some space on the repeated media like trailers and other ads inside the Blu-ray ISOs.
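A quick way to sanity-check this yourself (the file name is just a placeholder): losslessly compress a sample stream and compare sizes.

```
# Already-encoded video is near-random bytes; expect a ratio close to 1.0
zstd -19 -k movie.m2ts -o /tmp/movie.m2ts.zst
ls -lh movie.m2ts /tmp/movie.m2ts.zst
```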
You should maybe read about the use cases for deduplication before using it. Here's one recent article:
https://despairlabs.com/blog/posts/2024-10-27-openzfs-dedup-is-good-dont-use-it/
If you mostly store legit Blu-ray rips, the answer is probably no, you should not use zfs deduplication.
I’m in almost exactly the same situation as OP (8 TB of raw Blu-ray dumps), except I’m on XFS. I ran duperemove and freed ~200 GB.
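For reference, the run looked roughly like this (paths are placeholders). duperemove works via the FIDEDUPERANGE ioctl, which XFS and Btrfs support but ZFS does not:

```
# -d actually dedupes (instead of only reporting), -r recurses into directories,
# --hashfile stores block hashes on disk instead of holding them all in RAM
duperemove -dr --hashfile=/var/tmp/media.hash /mnt/media
```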
I was also going to link this. I started using zfs 10-ish years ago and used dedup when it came out, and it was really not worth it except for archiving a bunch of stuff I knew had gigs of duplicate data. Performance was so poor.
Something like fslint might be what you want: it scans folders and lists duplicates, and you decide how to deal with them. It's more manual, but I think it's what you're actually trying to achieve.
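fslint also ships CLI helpers next to the GUI; on Debian-style installs they historically lived under /usr/share/fslint (that path is from memory, so treat it as an assumption and adjust for your distro):

```
# List duplicate files under a directory
/usr/share/fslint/fslint/findup /mnt/media
```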
Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being identical to any other are small, unless you somehow have multiple copies of the same content.
Imagine two mp3s with the exact same music content but slightly different artist metadata. Make one file a single byte longer or shorter at the beginning, and even if the file spans many blocks, ZFS won't be able to deduplicate a single one: that one-byte offset shifts the rest of the file just enough to change the block checksums across every block in the file.
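You can watch this happen with plain files and fixed-size blocks; this is a toy demonstration, not ZFS itself:

```
# Two files with identical content, one shifted by a single byte
head -c 1M /dev/urandom > a.bin
{ printf 'X'; cat a.bin; } > b.bin
# Split both into 128 KiB blocks (the ZFS default recordsize) and hash them
split -b 131072 a.bin a_blk.
split -b 131072 b.bin b_blk.
# Print any hash that occurs more than once: there won't be any
md5sum a_blk.* b_blk.* | sort | uniq -D -w32
rm a.bin b.bin a_blk.* b_blk.*
```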
To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than fixed-block checks. They typically scan with sliding window sizes/offsets (content-defined chunking) to find duplicate data that isn't block-aligned.
There are still some use cases where ZFS dedup can help, like multiple full backups of VMs. A VM image has a fixed size, so the offset issue above doesn't apply.

But beware that enabling deduplication for even a single ZFS filesystem affects the entire pool, including filesystems that have deduplication disabled: the deduplication table is global to the pool, and once you have turned it on you can't really get rid of it. If you end up without enough memory to keep the deduplication table in RAM, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new pool.
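In ZFS terms (pool/dataset names are made up):

```
zfs set dedup=on tank/vm-backups   # only writes to this dataset get deduped...
zpool status -D tank               # ...but the DDT it feeds is pool-wide
zfs get -r dedup tank              # confirm which datasets have it enabled
```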
If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn't too far off) and its new fast dedup feature, which fixes, or at least mitigates, a lot of the major issues with ZFS dedup.
More info on the fast dedup feature here: https://github.com/openzfs/zfs/discussions/15896
I think the universal consensus is that outside of one very specific use case (multiple VDI desktops sharing the same base image), ZFS dedupe is useless at best, and at worst will destroy your dataset by making it unmountable on any system with less RAM than the dedup table needs. In every other use case, the savings are not worth the trouble.
Even in the VDI use case, unless you have MANY copies of each disk image (like 5+), it’s still not worth the increase in system resources needed to run ZFS dedupe.
It’s one of those “oooh shiny” nice features that everyone wants to use, but will regret it nearly every time.
ZFS dedup is memory constrained, and the memory use scales with the number of unique blocks, since every block hash has to be tracked in the dedup table.
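A rough back-of-envelope, assuming the commonly quoted ~320 bytes of RAM per dedup table entry and the default 128 KiB recordsize (both are assumptions; real numbers vary by pool):

```
# 8 TiB of unique data / 128 KiB per block = 64M unique blocks
# 64M entries * ~320 B per entry          = ~20 GiB of RAM for the DDT
# You can simulate dedup on an existing pool before enabling anything
# (pool name is made up):
zdb -S tank
```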
If performance isn't a concern, you're better off compressing your media. You'll get similar storage efficiency with less crash consistency risk.
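If that means filesystem-level compression, ZFS does it transparently per dataset (dataset name is a placeholder), though as the first comment notes, don't expect much from already-encoded video:

```
zfs set compression=zstd tank/media
zfs get compressratio tank/media   # check the achieved ratio after new writes land
```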
ZFS in general is pretty memory hungry. I set up my Proxmox server with ZFS pools a while ago and now I kind of regret it. ZFS itself is very nice and has a ton of useful features, but I just don't have the hardware or the usage pattern to benefit from it that much on my server. I'd rather run that thing on LVM and/or software RAID and have more usable memory for my VMs. Replacing the ZFS pools with something that suits my usage pattern better is one of the projects I've been planning for the server, but that's a whole other story and requires some spare money and spare time, neither of which I really have at hand right now.
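Before replacing the pool, it might be worth just capping the ARC; on Proxmox this is the usual knob for exactly that memory complaint (the 4 GiB value is only an example):

```
# Apply immediately:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# Persist across reboots:
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
```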
I haven’t tried it because I’ve read a lot of negative discussions of it, and because (by my understanding) the only reasonable use case would be a large number of users who are each likely to have copies of the same files (so you can't just manually de-dupe).