dr_robot

joined 1 year ago
[–] dr_robot@kbin.social 22 points 7 months ago (3 children)

It does not seem like you heard the arguments presented in the article. It isn't about anyone being offended by left- or right-wing politics; it's that women engineers and scientists were uncomfortable with it for a variety of reasons. In a field which struggles to attract and keep female talent, this is a pretty big thing. The model herself spoke out and asked to be "retired from tech".

[–] dr_robot@kbin.social 2 points 8 months ago

I'm working on a music collection manager with a TUI for myself. I prefer to buy and own music instead of just streaming, and I have a self-hosted server with ZFS and backups where I keep the music and from which I can stream or download to my devices. There are websites which help you keep track of what you own and maintain wishlists, but they don't really satisfy my needs, so I decided to create my own. Its main feature is an easier overview of which albums I own and don't own for the artists I'm interested in, and a wishlist based on that for my next purchases. I'm doing it in Rust, because it's a hobby project and I want to get better at Rust. However, it has paid off in other ways. The type system has allowed me to create a UI that is very safe to add features to without worrying about crashes. Sometimes I actually have to stop and think about why it didn't crash, only to find that Rust forced me to correctly handle an optional outcome before I could even get into an undefined situation.
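
To give a flavour of what I mean: the "currently selected album" in the UI is an Option, so the "nothing is selected" case has to be handled before the value can be used at all. This is just a made-up sketch, not my actual code:

```rust
// Hypothetical sketch (names invented): the selected album may or may not exist,
// and the compiler refuses to let me touch it without handling the None case.
struct Album {
    title: String,
    owned: bool,
}

fn toggle_owned(selected: Option<&mut Album>) {
    match selected {
        Some(album) => album.owned = !album.owned,
        // This branch is exactly the situation that would have been an
        // undefined state (or a crash) in a less strict language.
        None => {}
    }
}

fn main() {
    let mut album = Album { title: "Example".to_string(), owned: false };
    toggle_owned(Some(&mut album));
    toggle_owned(None); // safe no-op instead of a crash
    println!("{} owned: {}", album.title, album.owned);
}
```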

 

To build a fully climate-neutral transport system in the Netherlands, many citizens will have to give up their cars, Jan Willem Erisman, the government's new chief climate adviser and chairman of the Scientific Climate Council, told the AD.

[–] dr_robot@kbin.social 6 points 1 year ago (1 children)

Wireguard easily supports dual stack configuration on a single interface, but the VPN server must also have IPv6 enabled. I use AirVPN and I get both IPv6 and IPv4 with a single wireguard tunnel. In addition to the ::/0 route you also need a static IPv6 address for the wireguard interface. This address must be provided to you by ProtonVPN.

If that's not possible, the only solution is to entirely disable IPv6.
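
For reference, a dual-stack wg-quick config looks roughly like this (keys and addresses are placeholders; the IPv6 address on the interface is the one the provider has to give you):

```ini
[Interface]
PrivateKey = <client private key>
# One IPv4 and one IPv6 address on the same wireguard interface;
# the IPv6 address must come from the VPN provider.
Address = 10.2.0.2/32, fd00:1234:abcd::2/128

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
# Route all IPv4 and all IPv6 traffic through the tunnel.
AllowedIPs = 0.0.0.0/0, ::/0
```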

[–] dr_robot@kbin.social 4 points 1 year ago* (last edited 1 year ago)

The Netherlands recently passed a law requiring helmets on mopeds. This makes mopeds less attractive and pushes people towards fat bikes, because a helmet is not required on a fat bike.

[–] dr_robot@kbin.social 2 points 1 year ago* (last edited 1 year ago) (1 children)

Correct. And getting the right configuration is pretty easy. Debian has good defaults. The only change I make is configuring it to send emails to me when updates are installed. These emails will also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you'll be notified that they're being held back.

So yea, I strongly recommend unattended-upgrades with email configured.

Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don't run anything very important and that can handle downtime.
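
Roughly, the relevant options in /etc/apt/apt.conf.d/50unattended-upgrades are these (address made up; the blacklist only goes on the ZFS box):

```
// Mail a report when something changes (needs a working MTA such as postfix).
Unattended-Upgrade::Mail "me@example.com";

// On the ZFS/DKMS machine only: hold back kernel packages so the modules
// don't get out of sync with the running kernel until I reboot manually.
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
    "linux-headers-";
};

// Optional: let the machine reboot itself (only on unimportant boxes).
Unattended-Upgrade::Automatic-Reboot "false";
```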

[–] dr_robot@kbin.social 4 points 1 year ago (3 children)

A few simple rules make it quite easy for me:

  • Firstly, I do not run anything critical myself. I cannot guarantee that I will have time to resolve issues as they come up. Therefore, I tolerate a moderate risk of a borked update.
  • All servers run the same OS. Therefore, I don't have to resolve different issues for different machines. There is the risk that one update will take them all out, but see my first point.
  • That OS is stable, in my case Debian, so updates are rare and generally safe to apply without much thought.
  • Run as little as possible on bare metal and avoid third-party repos or downloading individual binaries unless absolutely necessary. Complex services should run in containers and be updated by updating the container image.
  • Run unattended-upgrades on all of them. I deploy the configuration via Ansible (see the sketch after this list). Since they all run the same OS, I only need to figure out the right configuration once and then it's just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server, because it has ZFS through DKMS, so those are too risky to apply blindly.
  • Have postfix set up so that unattended-upgrades can email me when a reboot is required. I reboot only when I know I'll have some time to fix anything that breaks. For the blacklisted packages I will get an email that they've been held back so I know that I need to update manually.
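
The Ansible side is nothing fancy, roughly a template task plus making sure the apt timers are on (file and role names are just how I happen to organise it):

```yaml
# Rough sketch of the tasks (names made up)
- name: Install unattended-upgrades and postfix
  ansible.builtin.apt:
    name:
      - unattended-upgrades
      - postfix
    state: present

- name: Deploy unattended-upgrades configuration
  ansible.builtin.template:
    src: 50unattended-upgrades.j2
    dest: /etc/apt/apt.conf.d/50unattended-upgrades
    owner: root
    group: root
    mode: "0644"

- name: Enable the apt timers
  ansible.builtin.systemd:
    name: "{{ item }}"
    enabled: true
    state: started
  loop:
    - apt-daily.timer
    - apt-daily-upgrade.timer
```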

This has been working great for me for the past several months.

For containers, I rely on Podman auto-update and systemd. Actually, I use my own script that imitates its behaviour, because I had issues with Podman pulling images which were not new but which nevertheless triggered restarts of the containers. However, I pin the major version number and check and update major versions by hand. Major version updates stung me too much in the past when I'd apply them after a long break.
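
For the curious, the stock mechanism I'm imitating looks roughly like this: label the containers and let the podman-auto-update timer do the pulls and restarts of the systemd-managed containers (image name is a placeholder; my script just adds its own digest check before restarting):

```
# Opt a container in to auto-updates (the image must be referenced by a registry name,
# with the major version pinned in the tag):
podman run -d --name myservice \
  --label io.containers.autoupdate=registry \
  docker.io/example/myservice:1

# Enable the timer that ships with podman:
systemctl --user enable --now podman-auto-update.timer

# Dry run to see what would be updated:
podman auto-update --dry-run
```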

[–] dr_robot@kbin.social 5 points 1 year ago (1 children)

I expose my services to the web via my own VPS proxy :) I simply run only very few of them, use 2FA where supported, keep them up to date, run each service as a rootless Podman container, have a very verbose logcheck set up in case the container environment gets compromised, and allow only ports 80 and 443. Very importantly, truly sensitive data (documents and such) is encrypted at rest, so even if my services are compromised that data remains secure.
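
The port restriction is just a default-deny firewall on the VPS, something along these lines (ufw shown as an example; nftables works just as well):

```
# Default-deny inbound, then only let the reverse proxy ports through:
ufw default deny incoming
ufw default allow outgoing
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```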

For ssh, I have set up a separate raspberry pi as a wireguard server into my home network. Therefore, for any ssh management I first connect via this wireguard connection.

[–] dr_robot@kbin.social 21 points 1 year ago

Most open source vpn protocols, afaik, do not obfuscate what they are, because they're not designed to work in the presence of a hostile operator. They only encrypt the user data. That is, they will carry information in their header that they are such and such vpn protocol, but the data payload will be encrypted.

You can open up wireshark and see for yourself. Wireshark can very easily recognize and even filter wireguard packets regardless of port number. I've used it to debug my firewall setups.
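
If you want to try it yourself, as far as I remember the display filter is simply the protocol name, so something like this on the command line (interface name is just an example):

```
# Show only packets Wireshark's dissector classifies as WireGuard,
# regardless of which UDP port the tunnel uses:
tshark -i eth0 -Y wireguard
```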

In the past, when I needed a VPN in such a situation, I had to resort to a paid option where the VPN provider had their own protocol which tried to obfuscate its nature.

[–] dr_robot@kbin.social 1 points 1 year ago (1 children)

Thanks for this useful reply! I think I'll just need to closely examine my setup and figure out if I really need the ability to up/down interfaces like I described or whether the more persistent approach of networkd is actually more suitable for me. Sometimes I just want to reproduce behaviour that I've used before, but may not actually need.

[–] dr_robot@kbin.social 1 points 1 year ago (3 children)

Thanks for your reply! One thing I'm struggling with in networkd is hysteresis. Toggling the interface down and then back up does not do what I expect: setting the interface down does not clear the configuration, and setting the interface up does not reconfigure the interface. I have to run reconfigure for that. I was hoping that the declarative approach of networkd would make it easy to predict interface state and configuration.

This does make sense because configuration is not the same as operational state. However, what would the equivalent of ifdown (set interface down and remove configuration) and ifup (set interface up and reconfigure) be using networkd and networkctl? This kind of feature would be useful for me to test config changes, debug networking issues, disconnect part of the network while I'm making some changes, etc.
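
Concretely, this is the sequence I keep running into (wg0 is just an example interface name; the comments describe the behaviour I'm seeing):

```
networkctl down wg0         # link goes down, but the configuration is not cleared
networkctl up wg0           # link comes back up, but the .network file is not re-applied
networkctl reconfigure wg0  # this is what actually re-applies the configuration
networkctl reload           # needed first if the .network/.netdev files themselves changed
```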

 

Does anybody have enough experience with both systems to compare them?

I'm currently using ifupdown on my Debian server as that's the default, but it seems that the modern way of managing the local network is via systemd-networkd so I'm contemplating putting the effort in to migrate.

Would those of you who have experience with it, recommend it?

In my short investigation, I have made the following observations:

  • using networkd means you can use networkctl to manually control the interfaces which is quite convenient
  • networkd aims to be fully declarative
  • networkd separates the creation of virtual interfaces (netdev files) from their configuration (network files)
  • networkd doesn't support all networking features (e.g. namespaces)
  • networkd is systemd, but surprisingly I can't find information on how to create other unit files that depend on the individual interfaces configured by the network files going up or down, other than networkd-dispatcher. I don't like dispatcher because, just like ifupdown, it triggers all the scripts and you need if-tests to exclude the interfaces you don't want affected. I'd like to write unit files that can be targeted to activate and deactivate when a particular interface goes up or down (a sketch of the kind of unit I have in mind is below this list).
  • networkd, other than via dispatcher, does not seem to support adding arbitrary commands to run like ifupdown supports via e.g. pre-down, post-up, etc.
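
For context, the kind of thing I'm hoping for is tying a unit to an interface's device unit, roughly like this (wg0 and the service name are just examples, and I haven't verified that this is the blessed way to do it):

```ini
# /etc/systemd/system/my-backup-sync.service (name made up)
[Unit]
Description=Job that should only run while wg0 exists
# The device unit appears when the interface is created and vanishes when it goes
# away, so BindsTo= should stop this service when the interface disappears.
BindsTo=sys-subsystem-net-devices-wg0.device
After=sys-subsystem-net-devices-wg0.device

[Service]
ExecStart=/usr/local/bin/my-backup-sync

[Install]
WantedBy=sys-subsystem-net-devices-wg0.device
```
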
[–] dr_robot@kbin.social 1 points 1 year ago

Thanks a lot for these tips! Especially about using the upstream deb.

[–] dr_robot@kbin.social 4 points 1 year ago

I subscribed. I use navidrome since it has a slick UI and supports the subsonic API. Having both in one is great.

 

Note: It seems my original post from last week didn't get posted on lemmy.world from kbin (I can't seem to find it) so I'm reposting it. Apologies to those who may have already seen this.

I'm looking to deploy some form of monitoring across my self-hosted servers and I'm a bit confused about the different options.

I have a small network of three machines that I would like to monitor. I am not looking for a solution that lets me monitor tens, hundreds, or thousands of nodes. Furthermore, I am more interested in being able to observe metrics for each node individually rather than in aggregate. Each of these machines performs a different task so aggregate metrics from these machines are not particularly meaningful. However, collecting all the metrics centrally so that I can have a single dashboard to view them all in one convenient place is definitely something I would like.

With that said, I have been trying to understand the different (popular) options that are available and I would like to hear what the community's experience is with these options and if anybody has any advice on any of these in light of my requirements above.

Prometheus seems like the default go-to for monitoring. This would require deploying a node_exporter on each node, a prometheus service, and a grafana dashboard. That's all fine, I can do that. However, from all that I'm reading it doesn't seem like Prometheus is optimised for my use case of monitoring each node individually. I'm sure it's possible, but I'm concerned that because this is not what it's meant for, it would take me ages to set it up such that I'm happy with it.
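
From what I've read so far, the scrape config itself would at least be tiny; something like this (hostnames made up) scrapes each node_exporter separately, and each target gets its own instance label so per-node Grafana dashboards would be a matter of filtering on that:

```yaml
# Minimal prometheus.yml sketch for three individually monitored nodes
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - server1.lan:9100
          - server2.lan:9100
          - server3.lan:9100
```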

Netdata seems like a comprehensive single-device monitoring solution. It also appears that it is possible to run your own registry to help with distributed monitoring. Not gonna lie, the netdata dashboard looks slick. An important additional advantage is that it comes packaged on Debian (all my machines run Debian). However, it looks like it does not store the metrics for very long. To solve that I could also set up InfluxDB and Grafana for long-term metrics. I could use Prometheus instead of InfluxDB in this arrangement, but I'm more likely to deploy a bunch of IoT devices than I am to deploy servers needing monitoring which means InfluxDB is a bit more future-proof for me as it could be reused for IoT data.

Cockpit is another single-device solution which additionally provides direct control of the system. The direct control is probably not so much of a plus, since I would never let Cockpit be accessible from outside my home network, whereas I wouldn't mind that so much for dashboards with read-only data (still behind some authentication, of course). It's also probably not built for monitoring specifically, but I included it in the list in case somebody has something interesting to say about it.

What's everybody's experience with the above solutions, and does anybody have advice specific to my situation? I'm currently leaning towards netdata with my own registry at first, and adding InfluxDB and Grafana for long-term metrics later.
