this post was submitted on 20 May 2025
308 points (97.5% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

[–] jeena@piefed.jeena.net 59 points 1 day ago (10 children)

I wanted to ask where the border of selfhosting is. Do I need to have the storage and computing at home?

Is a cheap VPS at Hetzner, where I installed Python, PieFed, and its Postgres database, but also nginx and Let's Encrypt manually by myself, and pointed my domain at it, selfhosting?
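For context, the manual setup described here boils down to a handful of commands. A hedged sketch, assuming a Debian/Ubuntu VPS; `example.com`, the service port, and file paths are placeholders, not details from the post:

```shell
# Base packages (Debian/Ubuntu package names assumed)
sudo apt install python3 python3-venv postgresql nginx

# Dedicated Postgres role and database for the app
sudo -u postgres createuser --pwprompt piefed
sudo -u postgres createdb --owner=piefed piefed

# One nginx virtual host reverse-proxying to the app (port is illustrative)
sudo tee /etc/nginx/sites-available/piefed <<'EOF'
server {
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/piefed /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# TLS via certbot's nginx plugin
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com
```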

[–] smiletolerantly@awful.systems 72 points 1 day ago (2 children)

I would say yes, it's still self-hosting. It's probably not "home labbing", but you are still responsible for all the services you host yourself; it's just the hardware that's managed by someone else.

Also don't let people discourage you from doing bare-metal.

[–] grrgyle@slrpnk.net 3 points 10 hours ago

Interesting distinction. I use a small managed vps, but didn't consider that self-hosting, personally. I do aspire to switch to a homelab and figure out dynamic DNS and all that one day.
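The dynamic DNS part mentioned above is usually just a periodic job hitting an update endpoint. A minimal sketch, assuming a hypothetical DDNS provider; the URL, hostname, and token are placeholders for whatever your provider actually documents:

```shell
#!/bin/sh
# Hypothetical DDNS updater, e.g. run from cron every few minutes.
TOKEN="your-api-token"
HOST="home.example.com"

# Discover the current public IP (ifconfig.me is one common echo service)
IP=$(curl -fsS https://ifconfig.me)

# Many providers accept a simple authenticated GET to an update endpoint
curl -fsS "https://ddns.example.net/update?hostname=${HOST}&ip=${IP}&token=${TOKEN}"
```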

[–] stefenauris@pawb.social 12 points 1 day ago

That's actually a good point: self-hosting and home labbing are similar things, but they don't necessarily mean the same thing.

[–] EncryptKeeper@lemmy.world 13 points 1 day ago

It’s self hosting as long as you are in control of the data you’re hosting.

[–] tripflag@lemmy.world 18 points 1 day ago* (last edited 1 day ago)

It depends who you ask (which we can already tell hehe), but I'd say YES, because you're the one running the show -- you're free to grab all of your bits and pieces at any time, and move to a different provider. That flexibility of not being locked into one specific cloud service (which can suddenly take a bad turn) is what's precious to me.

And on a related note, I also make sure that this applies to my software-stack too -- I'm not running anything that would be annoying to swap out if it turns bad.

[–] Xanza@lemm.ee 11 points 1 day ago

I would say there's no value in assigning such a tight definition on self-hosting--in saying that you must use your own hardware and have it on premise.

I would define selfhost as setting up software/hardware to work for you, when turn-key solutions exist because of one reason or another.

Netflix exists. But we selfhost Jellyfin. Doesn't matter if its not on our hardware or not. What matters is that we're not using Netflix.

[–] Luffy879@lemmy.ml 5 points 1 day ago

Self-hosting just means maintaining your own instance of a web service instead of paying for someone else's.

As long as you don't pay Hetzner for an explicit fully managed Nextcloud server, it doesn't matter whether the OS you're running it on is a VM or a bare-metal server.

[–] irmadlad@lemmy.world 5 points 1 day ago

Is a cheap VPS at Hetzner, where I installed Python, PieFed, and its Postgres database, but also nginx and Let's Encrypt manually by myself, and pointed my domain at it, selfhosting?

I don't get hung up on the definitions and labels. I run a hybrid of 3 vps and one rack in the closet. I'm totally fine with you thinking that is not selfhosting or homelabbing. LOL I have a ton of fun doing it, and that's the main reason why I do it; to learn and have fun. It's like producing music, or creating bonsai, or any of the other many hobbies I have.

[–] avidamoeba@lemmy.ca 2 points 1 day ago

I'd say you need storage. Once you get storage, use cases start popping up into view over time.

[–] hperrin@lemmy.ca 1 points 1 day ago (1 children)

Your stuff is still in the cloud, so I would say no. It’s better than using the big tech products, but I wouldn’t say it’s fully “self hosted”. Not that that really makes much of a difference. You’re still pretty much in control of everything, so you should be fine.

[–] jeena@piefed.jeena.net 8 points 1 day ago (1 children)

Where is the tipping point though? If I have a server at my parents house, they live in Germany and I in Korea, does my dad host it then because he is paying for the electricity and the access to the internet and makes sure those things work?

[–] hperrin@lemmy.ca 4 points 1 day ago (2 children)

Your parents’ house isn’t the cloud, so yeah, it’s self hosted. The “tipping point” is whether you’re using a hosting provider.

[–] smiletolerantly@awful.systems 10 points 1 day ago (2 children)

They are using a hosting provider - their dad.

"The cloud" is also just a bunch of machines in a basement. Lots of machines in lots of "basements", but still.

[–] wreckedcarzz@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

"hosting provider" in this instance I think means "do you pay them (whoever has the hardware in their possession) a monthly/quarterly/yearly fee"

otherwise you can also say "well ACTUALLY your isp is providing the ability to host on the wan so they are the real hosting provider" and such...

[–] hperrin@lemmy.ca 2 points 1 day ago

Their dad is not a hosting provider. I mean, maybe he is, but that would be really weird.

[–] jeena@piefed.jeena.net 6 points 1 day ago (1 children)

Isn't my dad the hosting provider? I ordered the hardware, he connected it to his switch and his electricity and pressed the button to start it the first time. From there on I logged in to his VPN and set up the server like I would at Hetzner.

But you're right, it doesn't really make a difference. I feel the only difference it makes for me is where I post my questions on Lemmy: in a !selfhosting community or a !linux community.

From a feeling perspective, even if I use Hetzner's cloud, I feel like I'm self-hosting my single-user PieFed instance (and Matrix, my other websites, Mastodon, etc.) because I have to perform basically the same steps as for the things I'm really hosting at home, like open-webui, Immich, and PeerTube.

[–] hperrin@lemmy.ca 0 points 1 day ago

A hosting provider is a business. If your dad is a business and you are buying hosting services from him, then yes, he is a hosting provider and you are not self hosting. But that’s not what you’re doing. You’re hosting on your own hardware on your family’s internet. That’s self hosting.

When you host on Hetzner, you’re hosting on their hardware using their internet. That’s not self hosting. It’s similar, cause like you said, you have to do a lot of the same administration work, but it’s not self hosting.

Where it gets a little murky is rack space providers. Then you’re hosting on your own hardware, but it’s not your own internet, and there’s staff there to help you… kinda iffy whether you’re self hosting, but I’d say yeah, since you own the hardware.

[–] ifmu@lemmy.world 0 points 1 day ago

Personally, I’d say no. At that point you are administering it, not hosting it yourself.

[–] possiblylinux127@lemmy.zip -1 points 1 day ago* (last edited 1 day ago) (2 children)

Why wouldn't you just use Docker or Podman?

Manually installing stuff is actually harder in a lot of cases.

[–] smiletolerantly@awful.systems 11 points 1 day ago (3 children)

Yeah why wouldn't you want to know how things work!

I obviously don't know you, but to me it seems that a majority of Docker users know how to spin up a container, but have zero knowledge of how to fix issues within their containers, or to create their own for their custom needs.

[–] FlexibleToast@lemmy.world 9 points 1 day ago (1 children)

That's half the point of the container... You let an expert set it up so you don't have to know it on that level. You can manage far more containers this way.

[–] smiletolerantly@awful.systems 7 points 1 day ago (3 children)

OK, but I'd rather be the expert.

And I have no trouble spinning up new services, fast. Currently sitting at around ~30 Internet-facing services, 0 Docker containers, and reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.
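The commenter later mentions this is NixOS; with a flake-based config, that "single command" can look roughly like the following. The hostname, repo layout, and the choice of restic as the backup tool are assumptions for illustration, not details from the thread:

```shell
# Rebuild the whole machine from the declarative config in this repo
sudo nixos-rebuild switch --flake .#myserver

# Restore service state from backups (restic assumed; repo path is a placeholder)
restic -r /srv/backups/myserver restore latest --target / --include /var/lib
```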

[–] wreckedcarzz@lemmy.world 6 points 1 day ago (1 children)

I'd rather be the expert

Fair, but others, unless they are getting paid for it, just want their shit to work. Same as people who take their cars to a mechanic instead of wrenching on it themselves, or calling a handyman when stuff breaks at home. There's nothing wrong with that.

[–] FlexibleToast@lemmy.world 3 points 1 day ago (1 children)

I literally get paid to do this type of work, and there is no way for me to be an expert in all the services that our platform runs. Again, that's kind of the point. Let the person who writes the container be the expert. I'll provide the platform, the maintenance, upgrades, etc. The developer can provide the expertise in their app.

[–] notfromhere@lemmy.ml 2 points 1 day ago (2 children)

A lot of times it is necessary to build the container oneself, e.g., to fix a bug, satisfy a security requirement, or because the container as-built just isn’t compatible with the environment. So in that case would you contract an expert to rebuild it, host it on a VM, look for a different solution, or something else?

[–] FlexibleToast@lemmy.world 1 points 1 day ago

Containerfiles are super easy to write. For the most part if you can do it in a VM, you can do it in a container. This sort of thing is why you would move to containers. Instead of being the "expert" in all the apps you run, you can focus on the things that actually need your attention.
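As an illustration of how short a Containerfile can be, here is a sketch that containerizes a generic Python web app; the base image tag, file names, and gunicorn entry point are assumptions, not anything from the thread:

```shell
# Hypothetical example: write a minimal Containerfile and build it with podman
cat > Containerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
EOF
podman build -t myapp .
```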

[–] WhyJiffie@sh.itjust.works 2 points 1 day ago (1 children)

It's not like it's so hard to rebuild a container for the occasional service that needs it, and that's still much better than needing to do it for every single service.

[–] notfromhere@lemmy.ml 1 points 1 day ago (1 children)

It depends on the container, I suppose. There are some that are very difficult to rebuild, depending on what's in it and what it does. Some very complex software can be run in containers.

[–] FlexibleToast@lemmy.world 2 points 1 day ago

Yep, some people sort of miss the point of microservices and make some fairly monolithic containers. Or they're legacy apps being shoehorned into a container. Some things still require handholding. FreeIPA is a good example. They have a container version, but it's just a monolithic install in a container and only recommended for testing.

[–] notfromhere@lemmy.ml 1 points 1 day ago* (last edited 1 day ago) (1 children)

reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.

Is that with Ansible or your own tooling or something else?

[–] smiletolerantly@awful.systems 2 points 1 day ago (1 children)

NixOS :)

Maybe I should have clarified that liking bare-metal does not imply disliking abstraction

[–] notfromhere@lemmy.ml 1 points 1 day ago (1 children)

I’ve been wanting to tinker with NixOS. I’m stuck in the stone ages, automating VM deployments on my Proxmox cluster using Ansible. One line and about 30 minutes (the CUDA install is a beast) to build a reproducible VM running llama.cpp with llama-swap.

[–] smiletolerantly@awful.systems 2 points 20 hours ago

Nice. My partner has a Proxmox setup, so we've adapted the Nix config to spin up new VMs of any machine with a single command.

[–] walden@sub.wetshaving.social 3 points 1 day ago (1 children)

I use apps on my phone, but have no clue how to troubleshoot them. I have programs on my computer that I hardly know how to use, let alone know the inner workings of. How is running things in Docker any different? Why put down people who have an interest in running things themselves?

I know you're just trying to answer the above question of "why do it the hard way", but it struck me as a little condescending. Sorry if I'm reading too much into it!

[–] smiletolerantly@awful.systems 7 points 1 day ago (1 children)

No, I actually think that is a good analogy. If you just want to have something up and running and use it, that's obviously totally fine and valid, and a good use-case of Docker.

What I take issue with is the attitude which the person I replied to exhibits, the "why would anyone not use docker".

I find that to be a very weird reaction to people doing bare metal. But also I am biased. ~30 Internet facing services, 0 docker in use 😄

[–] MXX53@programming.dev 2 points 1 day ago

This is interesting to me. I run all of my services, custom and otherwise, in docker. For my day job, I am the sole maintainer of all of our docker environment and I build and deploy internal applications to custom docker containers and maintain all of the network routing and server architecture. After years of hosting on bare metal, I don’t know if I could go back to the occasional dependency hell that is hosting a ton of apps at the same time. It is just too nice not having to think about what version of X software I am on and to make sure there isn’t incompatibility. Just managing a CI/CD workflow on bare metal makes me shudder.

Not to say that either way is wrong, if it works it works imo. But, it is just a viewpoint that counters my own biases.

[–] possiblylinux127@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

You can customize or build custom containers with a Dockerfile.

Also, I want to know how containers work. That's way more useful.
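Customizing an existing image, as suggested above, is usually just a matter of layering on top of it. A hedged sketch; the upstream image name and config path are made up for illustration:

```shell
# Rebuild an upstream image with a local fix layered on top (names illustrative)
cat > Containerfile <<'EOF'
FROM ghcr.io/example/someapp:latest
# e.g. replace a config file the stock image ships with, or apply a patch
COPY fixed-config.ini /etc/someapp/config.ini
EOF
docker build -t someapp-patched .
docker run -d --name someapp someapp-patched
```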

[–] jeena@piefed.jeena.net 7 points 1 day ago (1 children)

I did that first, but that always required many more resources than doing it yourself, because every Docker setup starts its own database and its own nginx/Apache server in addition to the software itself.

Now I have just one PostgreSQL instance running, with many users and databases on it. Also just one nginx, which does all the virtual host stuff in one central place. And both the things I install with apt and the things I install manually are set up similarly.

I use one Docker setup for firefox-sync, but only because doing it manually is not documented, and even for the Docker way I had to research for quite some time.
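Adding one more service to a shared-instance setup like this is then only a few commands. A sketch with placeholder names and ports:

```shell
# One more role + database on the single shared Postgres instance
sudo -u postgres psql -c "CREATE USER newapp WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE newapp OWNER newapp;"

# One more virtual host on the single shared nginx
sudo tee /etc/nginx/sites-available/newapp <<'EOF'
server {
    server_name newapp.example.com;
    location / { proxy_pass http://127.0.0.1:8100; }
}
EOF
sudo ln -s /etc/nginx/sites-available/newapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```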

[–] FlexibleToast@lemmy.world 2 points 1 day ago (1 children)

What? No it doesn't... You could still have just one PostgreSQL database if you wanted just one. It is a bit antithetical to microservices, but there is no reason you can't do it.

[–] jeena@piefed.jeena.net 5 points 1 day ago (4 children)

But then you can't just use the containers provided by the service developers, and you have to figure out how to redo their container, which in the end is more work than just running it manually.

[–] WhyJiffie@sh.itjust.works 2 points 1 day ago* (last edited 1 day ago)

I have very rarely run into such issues. Can you give an example of something that works like that? It sounds very half-assed by the developer. Only Pi-hole comes to mind right now (except for the db part, because I think it uses SQLite).

edit: I now see your examples

[–] jeena@piefed.jeena.net 2 points 1 day ago (2 children)
[–] WhyJiffie@sh.itjust.works 1 points 1 day ago* (last edited 1 day ago)

All of these run the database in a separate container, not inside the app container. The latter would be hard to fix, but the former is just done that way to make documentation easier: it lets them give you a single compose file that is functional in itself. None of them use their own builds of the database server (though Lemmy, with its postgres variant, may be a bit of an outlier), so they are relatively easy to configure for an existing db server.

All I do in cases like this is look up the database initialization command (in the docker compose file), run that in my primary Postgres container, create a new Docker network, and attach it to both the Postgres stack and the new app's stack (stack: the container composition defined by the docker compose file). Then I tell the app container, usually through env vars or command-line parameters embedded in the compose file, that the database server is at hostname xy, plus the user and password for the connection; Docker's internal DNS server knows that for hostname xy it should return the IP address of the container named xy, through the appropriate Docker network. From then on, from the app's point of view, my database server in that other container is just like a dedicated physical Postgres machine you put on the network with its own cable going to a switch.

Unless there are very special circumstances where the app needs a custom build of Postgres, they can share a single instance just fine. In that case you would have to run two Postgres instances even without Docker, or migrate everything to the modified Postgres, which is an option with Docker too.
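The steps described above translate to roughly the following; the container, network, and credential names are made up, and the env var names depend on the app image:

```shell
# Shared network joining the central Postgres container and the new app
docker network create shared-db
docker network connect shared-db postgres

# Create the app's role and database inside the existing Postgres container
docker exec postgres psql -U postgres -c "CREATE USER newapp PASSWORD 'changeme';"
docker exec postgres psql -U postgres -c "CREATE DATABASE newapp OWNER newapp;"

# Start the app on the same network; Docker's DNS resolves "postgres" by name
docker run -d --name newapp --network shared-db \
  -e DB_HOST=postgres -e DB_USER=newapp -e DB_PASSWORD=changeme \
  ghcr.io/example/newapp:latest
```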

[–] FlexibleToast@lemmy.world 2 points 1 day ago

Well, yes that's best practice. That doesn't mean you have to do it that way.

[–] notfromhere@lemmy.ml 1 points 1 day ago

Typically, the container image maintainer will provide environment variables which can override the database connection. This isn’t always the case but usually it’s as simple as updating those and ensuring network access between your containers.
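For instance, a compose file might point an app at an existing shared database purely via environment variables. A config sketch; the image, variable names, and network are illustrative, since every image documents its own:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest
    environment:
      DB_HOST: postgres        # existing shared instance, reachable by name
      DB_NAME: app
      DB_USER: app
      DB_PASSWORD: changeme
    networks: [shared-db]

networks:
  shared-db:
    external: true             # pre-created network the Postgres container joins
```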

[–] FlexibleToast@lemmy.world 1 points 1 day ago

You absolutely can. It's not like the developers of PostgreSQL maintain a version of PostgreSQL that only allows one database. You can connect to the instance and add however many databases you want to it.