The main reason many sub-communities are stuck on Telegram (and Discord) is the set of public group chat/broadcast channel features. Signal still has a 1,000-member group size limit, which is more than enough for a "group DM" but mostly useless for groups with publicly posted invite links. Those same groups would also much rather have functional scrollback/search on join than encryption.
Gonna add a dissenting "maybe but not really". YT is really aggressive on this kinda stuff lately and the situation is changing month by month. YT has multiple ways of flagging your IP as potentially problematic and as soon as you get flagged you're going to end up having to run quite an annoying mess of scripts that may or may not last in the long term. There's some instructions in a stickied issue on the Invidious repo.
You can't pretend an open port is closed, because an open port is really just a service that's listening. You can't pretend-close it and still have that service work. The only thing you can do is firewalling off the entire service, but presumably, any competent distro will firewall off all services by default and any service listening publicly is doing so for a good reason.
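The point can be shown with a minimal Python sketch: "open" is nothing more than a socket currently listening, and "closed" is just the absence of one (the loopback address and timeout values here are arbitrary choices for the demo).

```python
import socket

# Bind a listener: this is exactly what makes a port "open".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# While the service listens, a connect succeeds: the port scans as open.
conn = socket.create_connection(("127.0.0.1", port), timeout=1)
conn.close()

# Stop the service and the very same port now refuses connections.
# "Closed" is not a separate state you configure; it's just no listener.
srv.close()
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
    reachable = True
except OSError:  # typically ConnectionRefusedError
    reachable = False
print(reachable)
```

There is no way to keep the service answering while making the port look closed, which is why the only real knob is a firewall dropping the traffic in front of the listener.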
I guess it comes down to whether they feel like it's worth obfuscating port scan data. If you deploy that across all of your network then you make things just a little bit more annoying for attackers. It's a tiny bit of obfuscation that doesn't really matter, but I guess plenty of security teams need every win they can get, as management is always demanding that you do more even after you've done everything that's actually useful.
Looking at the slides in the original Japanese source, this tooling also has a whole lot of analysis options and can pull/push game data/positioning both to and from a real Switch, or something along those lines. Integrating that many custom features into an off-the-shelf tool would probably take just as long.
Did a physical-to-virtual-to-physical conversion to upgrade and unbreak a webserver that had been messed up by simultaneously installing packages from Debian and Ubuntu.
It's a problem in the Secure Boot chain: every system is affected by any vulnerability in any past, present, or future bootloader that that system currently trusts. Even if it's a bootloader for an OS you aren't using, an attacker could "just" install that vulnerable bootloader themselves.
That said, MS had also been patching their own CVE-2023-24932 / CVE-2024-38058, and disabled the fix for that in this update due to widespread issues with it. I don't think anyone knows what they're doing anymore.
bcrypt has a maximum password length of 72 bytes (the original paper says 56, matching Blowfish's 448-bit key limit, but common implementations truncate at 72), and while it's not today's preferred algo for new stuff, it's still completely fine and widely used.
My dotfiles aren't distro-specific because they're symlinks into a git repo (or tarball) + a homegrown shell script to make them, and that's about the end of it.
My NixOS configuration is split between must-have CLI tools/nice-to-have CLI tools/hardware-related CLI tools/GUI tools and functions as a suitable reference for non-Nix distros, even having a few comments on what the package names are elsewhere, but installation is ultimately still manual.
It's absolutely not the case that nobody was thinking about computer power use. The Energy Star program had been around for around 15 years at that point and even had an EU-US agreement, and that was sitting alongside the EU's own energy program. Getting an 80Plus-certified power supply was already common advice to anyone custom-building a PC which was by far the primary group of users doing Bitcoin mining before it had any kind of mainstream attention. And the original Bitcoin PDF includes the phrase "In our case, it is CPU time and electricity that is expended.", despite not going in-depth (it doesn't go in-depth on anything).
The late 00s weren't the late 90s, when the most common OS in use didn't support CPU idle without third-party tooling hacking it in.
Eh, no. "I'm going to make things annoying for you until you give up" is literally something already happening; Titanfall and the like suffered from it hugely. "I'm going to steal your stuff and sell it" is a tale as old as time, warez CDs used to be commonplace; it's generally avoided by giving people a way to buy your thing and giving people that bought the thing a way to access it. The situation where a third party profits off your game is more likely to happen if you don't release server binaries! For example, the WoW private/emulator server scene had a huge problem with people hoarding scripts, backend systems and bugfixes, which is one of the reasons hosted servers could get away with fairly extreme P2W.
And he seems to completely misunderstand what happens to IP when a studio shuts down. Whether it's bankruptcy or a planned closure, the IP will get sold off just like a laptop owned by the company would, and the new owner of the rights can enforce them if they think it's useful. Orphan works/"abandonware" can happen, just like they can to non-GaaS games and movies, but that's a horrible failing on the part of the company.
In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I'd check whether amdgpu ends up being loaded correctly via

  lsmod | grep amdgpu

and run a general

  journalctl -b 0 | grep amdgpu

to see if there are any obvious failures there. Chances are that even if it's not amdgpu, the real failure is in the journal somewhere. It could also be a wrong setting of hardware.enableRedistributableFirmware (should be true) or the new-ish hardware.amdgpu.initrd.enable (either value is valid, but one or the other might be more reliable on your system).