
Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Hi everyone!

Now that I have my home server up, running, and (at least partially) accessible to the outside world, backing it up has become a concern for me. Firstly, because I spent quite some time on the Docker Compose files, carefully assigning ports and volumes and configuring each service to work best in my current setup. Secondly, because I am starting to accumulate data (Vaultwarden passwords, inventory tracking, user statistics, etc.) that I very much want to keep if something happens to my main server.

Currently, my file structure on the server is a directory for (more or less) each service inside a Server/ directory in my home directory. Inside each service directory are a compose.yml and the volumes for that container. In some cases there are volumes that I don't need or want to back up (like Jellyfin libraries or torrent downloads).
I have a secondary notebook that I can use as a NAS for now to hold the backups.
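
To make that concrete, here is a rough sketch (placeholder paths and exclude list, not my actual setup) of the kind of script I could hack together myself: walk the Server/ directory, tar up each service folder, and skip the volumes I don't care about.

```python
#!/usr/bin/env python3
"""Rough sketch only: archive each service directory under ~/Server/ and
skip the volumes I don't want to keep. Paths and the exclude list are
placeholders, not my actual setup."""
import tarfile
from datetime import date
from pathlib import Path

SERVER_DIR = Path.home() / "Server"                      # one subdirectory per service
BACKUP_DIR = Path("/mnt/nas/backups")                    # hypothetical mount of the notebook/NAS
EXCLUDES = {"jellyfin/media", "qbittorrent/downloads"}   # hypothetical volumes to skip

def should_skip(member: tarfile.TarInfo) -> bool:
    # member.name is relative to SERVER_DIR, e.g. "jellyfin/media/movie.mkv"
    return any(member.name == ex or member.name.startswith(ex + "/") for ex in EXCLUDES)

def backup_service(service_dir: Path) -> None:
    archive = BACKUP_DIR / f"{service_dir.name}-{date.today()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(
            service_dir,
            arcname=service_dir.name,
            filter=lambda m: None if should_skip(m) else m,   # drop excluded volumes
        )

if __name__ == "__main__":
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for service in sorted(p for p in SERVER_DIR.iterdir() if p.is_dir()):
        backup_service(service)
```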

Which method would be best for this configuration? Or would you suggest a different file structure?

The best solution for me would be one that I could run in Docker with a web UI, but I am also comfortable with a CLI, so this is not a hard requirement.

top 2 comments
[–] knaak@alien.top 1 point 11 months ago

I use Git for my Docker Compose files, set up as its own project, with a top-level subdirectory for each server the containers run on. For me, those are "docker-external" and "docker-internal", which is how I partition between containers I expose via Cloudflare and those I don't.

Then on each server, I clone the repo into my home directory and create a symbolic link, always called "docker", that points to that server's directory.
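
In Python terms, that symlink step looks roughly like this sketch (the repo path and subdirectory name are placeholders, not my exact paths):

```python
#!/usr/bin/env python3
"""Minimal sketch of the symlink setup described above. The repo location
and subdirectory name are placeholders, not my exact paths."""
from pathlib import Path

repo = Path.home() / "compose-repo"        # where the repo gets cloned
server_dir = repo / "docker-internal"      # or "docker-external", depending on the server
link = Path.home() / "docker"              # same link name on every server

if link.is_symlink() or link.exists():
    link.unlink()                          # replace a stale link if one exists
link.symlink_to(server_dir, target_is_directory=True)
```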

That lets me manage my compose files nicely and push/pull to git.

I have Gitea running in one of the containers with all of my repositories, including the docker-compose ones.

Then I have a VM that I run my scheduled jobs on, with an external disk attached to it. Every day, I pull my Gitea repos onto the external drive, then push from there to AWS CodeCommit.
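
The daily job amounts to something like this sketch (the Gitea URL, repo names, region, and mount point are placeholders, not my real values):

```python
#!/usr/bin/env python3
"""Minimal sketch of the daily mirror job described above. The Gitea URL,
repo names, region, and mount point are placeholders, not my real values."""
import subprocess
from pathlib import Path

GITEA_BASE = "https://gitea.example.lan/me"                        # hypothetical Gitea base URL
CODECOMMIT_BASE = "https://git-codecommit.us-east-1.amazonaws.com/v1/repos"
MIRROR_ROOT = Path("/mnt/external/git-mirrors")                    # the attached external disk
REPOS = ["docker-compose", "scripts"]                              # hypothetical repo names

def run(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

MIRROR_ROOT.mkdir(parents=True, exist_ok=True)
for name in REPOS:
    mirror = MIRROR_ROOT / f"{name}.git"
    if mirror.exists():
        run("git", "remote", "update", "--prune", cwd=mirror)       # refresh the existing mirror
    else:
        run("git", "clone", "--mirror", f"{GITEA_BASE}/{name}.git", str(mirror))
    run("git", "push", "--mirror", f"{CODECOMMIT_BASE}/{name}", cwd=mirror)  # off-site copy
```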

That gives me automated backups of my code internally on each server and in Gitea, then internally on my external HDD, and finally externally in AWS, which fits my 3-2-1 backup policy.

Then, for my mounted volumes, I run Syncthing on each of my Docker hosts and on the VM with the external disk. A bi-weekly job syncs that to my NAS, and the NAS goes to Backblaze each week.
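
The bi-weekly job is essentially an rsync of the Syncthing data to the NAS, roughly like this sketch (paths and the NAS address are placeholders; the weekly NAS-to-Backblaze step runs on the NAS itself):

```python
#!/usr/bin/env python3
"""Minimal sketch of the bi-weekly NAS sync described above. The source path
and NAS address are placeholders; the weekly NAS-to-Backblaze step runs on
the NAS itself and isn't shown here."""
import subprocess

SOURCE = "/mnt/external/syncthing/"                           # Syncthing data on the external disk
DEST = "backup@nas.example.lan:/volume1/backups/syncthing/"   # hypothetical NAS target over SSH

# archive mode, mirror deletions so the NAS copy matches the source
subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)
```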

[–] Minituff@alien.top 1 point 11 months ago

I'm a little biased because I built it, but this is what I use.

I create backups of my Docker Compose files and each container volume (these are taken while the containers are stopped, so no data is corrupted). Then I take those folders and send them, encrypted, to Backblaze using Kopia. Since it's just config files, I'm able to get away with the free tier.
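
The flow is roughly this sketch (not the tool I built, just the idea; it assumes the ~/Server layout from the original post and a Kopia repository already connected to encrypted Backblaze B2 storage):

```python
#!/usr/bin/env python3
"""Minimal sketch of the idea described above (not the commenter's tool):
stop each stack, snapshot its directory with Kopia, then start it again.
Assumes Kopia is already connected to an encrypted Backblaze B2 repository
and uses the ~/Server layout from the original post."""
import subprocess
from pathlib import Path

SERVER_DIR = Path.home() / "Server"    # one directory per service: compose.yml + volumes

def run(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

for service in sorted(p for p in SERVER_DIR.iterdir() if p.is_dir()):
    run("docker", "compose", "stop", cwd=service)       # stop the stack so files aren't mid-write
    try:
        run("kopia", "snapshot", "create", str(service))
    finally:
        run("docker", "compose", "start", cwd=service)   # bring the stack back up either way
```

Stopping the stack before the snapshot is what keeps things like the Vaultwarden database from being copied mid-write.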

I also have my Docker compose files backed up in a private GitHub repo.