Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 6 points 2 weeks ago

Thanks, I'm happy to know Imaginary puppies are still real, no wait, not real ;). (The BBB is cool, wasn't aware of it, I don't keep up sadly. "Thus BBB is even more uncomputable than BB." always like that kind of stuff, like the different classes of infinity).

[–] Soyweiser@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

Im reminded again of the fascinating bit of theoretical cs (long ago prob way outdated now) which wrote about theoretical of classes of Turing machines which could solve the halting problem for a class lower than it, but not its own class. This is also where I got my oracle halting problem solver from.

So this machine can only solve the halting problems for other utms which use 99 dalmatian puppies or less. (Wait would a fraction of a puppy count? Are puppies Real or Natural? This breaks down if the puppies are Imaginary).

[–] Soyweiser@awful.systems 9 points 2 weeks ago (4 children)

Quis custodiet ipsos custodes?

[–] Soyweiser@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago)

Pretty good news tbh. That means that the power demand is driven by users, and we can influence it a little bit, and not just by repeatedly training new models over and over because somebody left a new comment somewhere. https://www.youtube.com/watch?v=XKQJXJOVGE4

[–] Soyweiser@awful.systems 10 points 2 weeks ago (8 children)

Bonus this also solves the halting problem

[–] Soyweiser@awful.systems 6 points 2 weeks ago (3 children)

Revealing just how forever online I am, but due to talking about the 'I like to watch' pornographic 9/11 fan music video from the Church of Euthanasia (I'm one of the two people who remembers this it seems) I discovered that the main woman behind this is now into AI-Doom. On the side of the paperclips. General content warnings all around (suicide, general bad taste etc), Chris was banned from a big festival (lowlands) in The Netherlands over the 9/11 video, after she was already booked (we are such a weird exclave of the USA, why book her, and then get rid of her over a 9/11 video in 2002?). Here is one of her conversations with chatgpt about the Churches anti-humanist manifesto. linked here not because I read it but just to show how AI is the idea that eats everything and I was amused by this weird blast from the past I think nobody recalls but now also into AGI.

[–] Soyweiser@awful.systems 5 points 2 weeks ago

Yeah indeed, had not even thought of the timegap. And it is such a bit of bullshit misdirection, very Muskian, to pretend that this fake transparency in any way solves the problem. We don't know what the bad prompt was nor who did it, and as shown here, this fake transparency prevents nothing. Really wished more journalists/commentators were not just free pr.

[–] Soyweiser@awful.systems 3 points 2 weeks ago

Im reminded of the cartoon bullets from who framed rodger rabbit.

[–] Soyweiser@awful.systems 10 points 2 weeks ago

LLMs cannot fail, they can only be prompted incorrectly. (To be clear, as I know there will be people who think this is good, I mean this in a derogatory way)

[–] Soyweiser@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago)

Think this already happened, not this specific bit, but ai involved shooting. Esp considering we know a lot of black people have been falsely arrested due to facial ID already. And with the gestapofication of the USA that will just get worse. (Esp when the police go : no regulations on AI also gives us carte blance. No need for extra steps).

[–] Soyweiser@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Remember those comments with links in them bots leave on dead websites? Imagine instead of links it sets up an AI to think of certain specific behaviour or people as immoral.

Swatting via distributed hit piece.

Or if you manage to figure out that people are using a LLM to do input sanitization/log reading, you could now figure out a way to get an instruction in the logs and trigger alarms this way. (E: im reminded of the story from the before times, where somebody piped logging to a bash terminal and got shelled because somebody send a bash exploit which was logged).

Or just send an instruction which changes the way it tries to communicate, and have the LLM call not the cops but a number controlled by hackers which pays out to them, like the stories of the A2P sms fraud which Musk claimed was a problem on twitter.

Sure competent security engineering can prevent a lot of these attacks but you know points to history of computers.

Imagine if this system was implemented for Grok when it was doing the 'everything is white genocide' thing.

Via Davidgerard on bsky: https://arstechnica.com/security/2025/05/researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious/ lol lmao

[–] Soyweiser@awful.systems 8 points 2 weeks ago (1 children)

"whats my purpose?"

view more: ‹ prev next ›