this post was submitted on 13 Jun 2025
98 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

jesus this is gross man

[–] SoftestSapphic@lemmy.world 14 points 1 day ago (1 children)

It's an autocomplete bot

People need to internalize this and move on from it

Ffs

[–] diz@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

I think it's gotten to the point where it's about as helpful to point out that it's just an autocomplete bot as it is to point out that "it's just the rotor blades chopping sunlight" when a helicopter pilot is impaired by flicker vertigo and is gonna crash. Or, in the world of the BLIT short story, that it's just some ink on a wall.

The human nervous system is incredibly robust compared to software, or to its counterpart in the fictional world of BLIT, or to shrimp mesmerized by cuttlefish.

And yet it has exploitable failure modes, and a corporation that is optimizing an LLM for various KPIs is a malign intelligence that is searching for a way to hack brains, this time with much better automated tooling and with a very large budget. One may even say a super-intelligence since it is throwing the combined efforts of many at the problem.

edit: that is to say, there has certainly been something weird going on at the psychological level ever since Eliza.

Yudkowsky is a dumbass layman posing as an expert, and he's playing up his own old preconceived bullshit. But if he can get some of his audience away from the danger - even if he attributes a good chunk of the malevolence to a dumbass autocomplete to do so - that is not too terrible of a thing.

[–] HedyL@awful.systems 5 points 1 day ago (1 children)

As I've pointed out earlier in this thread, it is probably fairly easy for someone devoid of empathy and a conscience to manipulate and control people. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). That doesn't mean the LLM is in any way sentient, though. However, this does not imply that there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other hand, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.

[–] diz@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

I think this may also be a specific low-level exploit, whereby humans are already biased to mentally "model" anything as having agency (see all the sentient gods that humans invented for natural phenomena).

I was talking to an AI booster (ewww) in another place, and I think they really are predominantly laymen brain-fried by this shit. That particular one posted a convo where, out of 4 arithmetic operations, 2 were "12042342 can be written as 120423 + 19, and 43542341 as 435423 + 18", combined with AI word salad, and he was expecting that this would be convincing.

It's not that this particular person thinks it's a genius; he thinks that it is not a mere computer, and the way it is completely shit at math only serves to prove to him that it is not a mere computer.

edit: And of course they care not for any mechanistic explanations, because all of those imply LLMs are not sentient, and they believe LLMs are sentient. The "this isn't it, but one day some very different system will be" counter-argument doesn't help either.

I mean you could make an actual evo psych argument about the importance of being able to model the behavior of other people in order to function in a social world. But I think part of the problem is also in the language at this point. Like, anthropomorphizing computers has always been part of how we interact with them. Churning through an algorithm means it's "thinking", an unexpected shutdown means it "died", when it sends signals through a network interface it's "talking" and so on. But these GenAI chatbots (chatbots in general, really, but it's gotten worse as their ability to imitate conversation has improved) are too easy to assign actual agency and personhood to, and it would be really useful to have a similarly convenient way of talking about what they do and how they do it without that baggage.

[–] bitofhope@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

It's just depressing. I don't even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.

He's absolutely wrong in thinking the LLM "knew enough about humans" to know anything at all. His "alignment" angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he's correct in saying the ethics of language models aren't a self-solving issue, even though he expresses it in critihype-laden terms.

Not that I like "handing it" to Eliezer Yudkowsky, but he's correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn't that far from this forum's reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.

[–] fullsquare@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

the subtext is always that he also says he knows how to solve it and throw money at cfar pleaseeee or basilisk will torture your vending machine business for seven quintillion years

[–] bitofhope@awful.systems 3 points 1 day ago

Yes, that is also the case.

[–] blakestacey@awful.systems 56 points 2 days ago* (last edited 2 days ago) (1 children)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

[–] V0ldek@awful.systems 15 points 2 days ago (1 children)

Lol, I'm a decision theorist because I had to decide whether I should take a shit or shave first today. I am also an author of a forthcoming book because, get this, you're not gonna believe it, here's something Big Book doesn't want you to know:

literally anyone can write a book. They don't even check if you're smart. I know, shocking.

Plus "forthcoming" can mean anything, Winds of Winter has also been a "forthcoming" book for quite a while

[–] swlabr@awful.systems 7 points 2 days ago (1 children)

Lol, I’m a decision theorist because I had to decide whether I should take a shit or shave first today.

What's your P(doodoo)?

[–] V0ldek@awful.systems 3 points 1 day ago* (last edited 1 day ago)

Changes during the day but it's always > 0.

[–] Anomalocaris@lemm.ee 19 points 2 days ago (8 children)

can we agree that Yudkowsky is a bit of a twat?

but also that there's a danger in letting vulnerable people access LLMs?

not saying that they should be banned, but some regulation and safety is necessary.

[–] expr@programming.dev 3 points 21 hours ago (1 children)

LLMs are a net negative for society as a whole. The underlying technology is fine, but it's far too easy for corporations to manipulate the populace with them, and people are just generally very vulnerable to them. Beyond the extremely common tendency to misunderstand and anthropomorphize them and think they have some real insight, they also delude (even otherwise reasonable) people into thinking that they are benefitting from them when they really... aren't. Instead, people get hooked on the feelings they give them and keep wanting to get their next hit (tokens).

They are brain rot and that's all there is to it.

[–] Anomalocaris@lemm.ee 0 points 21 hours ago (3 children)

can we agree that 90% of the problem with LLMs is capitalism and not the actual technology?

after all, the genie is out of the bottle. you can't destroy them, there are open source models. even if you ban them, you'll still have people running them locally.

[–] self@awful.systems 6 points 20 hours ago

can we agree that 90% of the problem with cigarettes is capitalism and not the actual smoking?

after all, the genie is out of the bottle. you can’t destroy them, there are tobacco plants grown at home. even if you ban them, you’ll still have people hand-rolling cigarettes.

it’s fucking weird how I only hear about open source LLMs when someone tries to make this exact point. I’d say it’s because the open source LLMs fucking suck, but that’d imply that the commercial ones don’t. none of this horseshit has a use case.

[–] bitofhope@awful.systems 3 points 20 hours ago

Frankly yes. In a better world art would not be commodified and the economic barriers that hinder commissioning of art from skilled human artists in our capitalist system would not exist, and thus generative AI recombining existing art would likely be much less problematic and harmful to both artists and audiences alike.

But also that is not the world where we live, so fuck GenAI and its users and promoters lmao stay mad.

[–] visaVisa@awful.systems 15 points 2 days ago* (last edited 2 days ago) (3 children)

i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and there needs to be something done about it

my point was more about him using it to make his worst-of-both-worlds arguments, where he's simultaneously saying that 'alignment is FALSIFIED!' and also doing heavy anthropomorphization to confirm his priors (whereas it'd be harder to say that about something that leans more towards 'maybe' on the question of whether it should be anthro'd, like claude, since that has a much more robust system), and doing it off the back of someone's death

[–] Soyweiser@awful.systems 20 points 2 days ago

Using a death for critihype jesus fuck

[–] visaVisa@awful.systems 16 points 2 days ago (27 children)

Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy but oh my god Yud is so gross here

Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any actual deeper claims of alignment by default. He's acting like someone who's smugly engagement farming.

[–] swlabr@awful.systems 27 points 2 days ago

Making LLMs safe for mentally ill people is very difficult

Arguably, they can never be made "safe" for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.

[–] BlueMonday1984@awful.systems 23 points 2 days ago (1 children)

Hot take: A lying machine that destroys your intelligence and mental health is unsafe for everyone, mentally ill or no

[–] AllNewTypeFace@leminal.space 19 points 2 days ago (2 children)

We’ve found the Great Filter, and it’s weaponised pareidolia.

[–] diz@awful.systems 6 points 1 day ago

Yeah, I think it is almost undeniable that chatbots trigger some low-level brain thing. Eliza has a 27% Turing Test pass rate. And long before that, humans attributed weather and random events to sentient gods.

This makes me think of Langford’s original BLIT short story.

And also of rove beetles that parasitize ant hives. These bugs are not ants, but they pass the Turing test for ants - they tap antennae with an ant, the handshake is correct, and they are identified as ants from this colony rather than unrelated bugs or ants from another colony.

[–] Soyweiser@awful.systems 7 points 2 days ago

"Yes," chatGPT whispered gently ASMR style, "you should but that cryptocoin it is a good investment". And thus the aliens sectioned off the Sol solar system forever.

[–] FartMaster69@lemmy.dbzer0.com 25 points 2 days ago (1 children)

ChatGPT has literally no alignment good or bad, it doesn’t think at all.

People seem to just ignore that because it can write nice sentences.

[–] antifuchs@awful.systems 15 points 2 days ago

But it apologizes when you tell it it’s wrong!
