this post was submitted on 13 Jun 2025
67 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

jesus this is gross man

top 43 comments
[–] blakestacey@awful.systems 44 points 21 hours ago* (last edited 21 hours ago) (1 children)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

[–] V0ldek@awful.systems 2 points 4 hours ago (1 children)

Lol, I'm a decision theorist because I had to decide whether I should take a shit or shave first today. I am also an author of a forthcoming book because, get this, you're not gonna believe this, here's something Big Book doesn't want you to know:

literally anyone can write a book. They don't even check if you're smart. I know, shocking.

Plus "forthcoming" can mean anything, Winds of Winter has also been a "forthcoming" book for quite a while

[–] swlabr@awful.systems 1 point 42 minutes ago

Lol, I’m a decision theorist because I had to decide whether I should take a shit or shave first today.

What's your P(doodoo)?

[–] Anomalocaris@lemm.ee 11 points 19 hours ago (2 children)

can we agree that Yudkowsky is a bit of a twat?

but also that there's a danger in letting vulnerable people access LLMs?

not saying that they should be banned, but some regulation and safety is necessary.

[–] visaVisa@awful.systems 10 points 19 hours ago* (last edited 19 hours ago) (1 children)

I for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and there needs to be something done about it

my point was more about him using it to make his worst-of-both-worlds argument, where he's simultaneously saying 'alignment is FALSIFIED!' and doing heavy anthropomorphization to confirm his priors (whereas that'd be harder to say about something like Claude, which leans more towards a maybe on whether it should be anthro'd since it has a much more robust system), and doing it off the back of someone's death

[–] Anomalocaris@lemm.ee 7 points 19 hours ago (1 children)

yeah, we should be talking about this

just not talking with him

[–] hungryjoe@functional.cafe 11 points 19 hours ago (1 children)

@Anomalocaris @visaVisa The attention spent on people who think LLMs are going to evolve into The Machine God will only make good regulation & norms harder to achieve

[–] Anomalocaris@lemm.ee 4 points 17 hours ago

yeah, we need reasonable regulation now, about the real problems it has.

like making them liable for training on stolen data,

making them liable for giving misleading information, and damages caused by it...

things that would be reasonable for any company.

do we need regulations about it becoming skynet? too late for that mate

[–] Soyweiser@awful.systems 15 points 21 hours ago

Using a death for critihype, jesus fuck

[–] o7___o7@awful.systems 10 points 21 hours ago

Very Ziz of him

[–] visaVisa@awful.systems 11 points 21 hours ago (4 children)

Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy but oh my god Yud is so gross here

Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any actual deeper claims of alignment by default. He's acting like someone who's smugly engagement farming.

[–] swlabr@awful.systems 23 points 21 hours ago

Making LLMs safe for mentally ill people is very difficult

Arguably, they can never be made "safe" for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.

[–] BlueMonday1984@awful.systems 18 points 20 hours ago (1 children)

Hot take: A lying machine that destroys your intelligence and mental health is unsafe for everyone, mentally ill or no

[–] AllNewTypeFace@leminal.space 13 points 19 hours ago (1 children)

We’ve found the Great Filter, and it’s weaponised pareidolia.

[–] Soyweiser@awful.systems 1 point 1 hour ago

"Yes," ChatGPT whispered gently, ASMR style, "you should buy that cryptocoin, it is a good investment." And thus the aliens sectioned off the Sol system forever.

[–] FartMaster69@lemmy.dbzer0.com 22 points 21 hours ago (1 children)

ChatGPT has literally no alignment, good or bad; it doesn't think at all.

People seem to just ignore that because it can write nice sentences.

[–] antifuchs@awful.systems 13 points 21 hours ago

But it apologizes when you tell it it’s wrong!

[–] Saledovil@sh.itjust.works 8 points 21 hours ago (1 children)

What even is the "alignment by default cope"?

[–] visaVisa@awful.systems 0 points 20 hours ago (1 children)

idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherency and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals... if it has morals, it's hard to tell how much of it is illusory and token prediction!)

this doesn't really falsify alignment by default at all, because 4o (presumably 4o at least) does not have that prerequisite of self-coherency and it's not SOTA

[–] YourNetworkIsHaunted@awful.systems 10 points 18 hours ago* (last edited 18 hours ago) (2 children)

if it has morals, it's hard to tell how much of it is illusory and token prediction!

It's generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.

[–] HedyL@awful.systems 3 points 5 hours ago

These systems are incredibly effective at mirroring whatever you project onto them back at you.

Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn't surprising when a well-trained LLM "picks up" similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots "just for fun", by the way).

Of course, "love bombing" is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).