I run two Pi-hole containers on my k8s cluster; they serve DNS to the rest of my network. This is extremely easy: I just use Helm to launch the Pi-hole containers into two different namespaces using two site-specific values files. Then I use Teleport to keep them in sync when I change something, which is seldom. I run two because DNS is important and I like automated patching / reboots, which requires redundant services.
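For reference, the two launches might look something like this with the mojo2600/pihole chart mentioned downthread; the release, namespace, and values-file names are placeholders, not the actual setup:

    # One release per namespace, each with its own site-specific values.
    # Release, namespace, and file names are assumptions for illustration.
    helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
    helm install pihole-a mojo2600/pihole -n pihole-a --create-namespace -f site-a.yaml
    helm install pihole-b mojo2600/pihole -n pihole-b --create-namespace -f site-b.yaml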
I just run a pg_dump through kubectl exec and pipe the stdout to a file on my master node. The same script then runs restic to send encrypted backups over to S3. I use the hostname flag on the restic command as kind of a hack to get per-service backups, which eliminates the risk of overwriting files or directories with the same name.
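A minimal sketch of that flow; the namespace, deployment, database, user, and bucket names are all made up for illustration:

    # Dump the DB out of the pod and land the file on the master node.
    # All names here are placeholders.
    kubectl exec -n gitea deploy/gitea-postgres -- \
        pg_dump -U gitea gitea > /backups/gitea.sql

    # Encrypted backup to S3; --host tags the snapshot with the service
    # name so each service gets its own snapshot lineage in the repo.
    restic -r s3:s3.amazonaws.com/homelab-backups \
        backup /backups/gitea.sql --host gitea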
Gitea, Flux, Pi-hole (HA), Joplin sync, all the Postgres instances to support those, a Synapse server, and Vaultwarden. I have a Postgres for each, but I use Longhorn so I have 3x replication. If one node dies, Postgres just spins up on another host and grabs the Longhorn volume. Longhorn runs atop one USB drive for each pod. All nodes are Raspberry Pis. If I wanted to I could run HA Postgres, but I can live with a few minutes of downtime on anything but DNS, which is HA.
I disagree. You can deploy nearly anything from Docker Hub or some other container registry on k8s with little to no trouble. Can you give some examples?
K8s can give you a reliable and mostly self-sufficient suite of tools for your homelab. There is a lot of upfront cost to get there. However, I'd argue k8s isn't actually much more complex than running individual Docker containers: in both cases you need an understanding of networking, containers, proxies, databases, and declarative config in some form or another. K8s just provides primitives that make it really easy to build up more complex container projects declaratively. That doesn't mean it has to be complex. I run 5 or 6 different services with individual backing Postgres DBs, and I source the containers from Docker Hub just like you would with plain Docker. Certbot will auto-deploy certs for any service I set up this way, and HAProxy will auto-add domains and upstreams for them too. When I want to set up a new service I often just copy an existing service manifest and do a find-and-replace with the new service name (see the sketch below). At that point I can usually just apply the manifest and wait 5 minutes; my service will be up, available on the internet, with SSL certs already issued.
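That copy-and-rename workflow, sketched with hypothetical file and service names:

    # Hypothetical names: clone an existing manifest, swap the service
    # name everywhere, and apply. Certs and proxy config then follow
    # on their own, per the setup described above.
    cp gitea.yaml joplin.yaml
    sed -i 's/gitea/joplin/g' joplin.yaml
    kubectl apply -f joplin.yaml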
I'll add: if you have really complex projects with tons of microservices, you can deploy a Helm chart for that in two commands, even with minimal or no knowledge of how it should be set up.
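For example, with a real public chart (picked here only to illustrate; it deploys a whole monitoring stack of components at once):

    # The "two commands" in practice.
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm install monitoring prometheus-community/kube-prometheus-stack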
If you use Helm charts this is really easy!! The one I use from mojo exposes this in the chart's values/config.