this post was submitted on 28 Jun 2025
79 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 21 comments
[–] etherphon@lemmy.world 14 points 8 hours ago

This is a larger symptom of the loneliness epidemic: people feel more and more detached from humanity every day, because the reality we have built for ourselves is quite harsh.

[–] MotoAsh@lemmy.world 60 points 20 hours ago (1 children)

"... is deeply prone to just telling people what they want to hear"

Noooo, nononono... It's specifically made to just tell people what they want to hear, in the general sense. That's the entire point of LLMs. They are not thinking. They have zero logic. They just "say" what is a mathematically agreeable segment of words in response.

IMO, these articles, and humanity's limp response to "AI" in general, only go to show how utterly inept and devoid of logic most people themselves are...
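
To be concrete about the "mathematically agreeable segment of words" bit: each decoding step just turns per-token scores into probabilities and samples one. A toy sketch in Python (the vocabulary and scores are invented for illustration, not taken from any real model):

```python
import math
import random

# Toy next-token step: the model assigns a score (logit) to every candidate
# token, softmax turns the scores into probabilities, and one token is
# sampled. No reasoning happens anywhere in this loop; "agreeable"
# continuations simply carry higher scores.
logits = {"hear.": 2.0, "see.": 0.5, "fear.": -1.0}  # invented scores

# Softmax: p_i = exp(z_i) / sum_j exp(z_j)
total = sum(math.exp(z) for z in logits.values())
probs = {tok: math.exp(z) / total for tok, z in logits.items()}

# Sample the next token in proportion to its probability.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

Sampling in proportion to probability is all "choosing what to say" amounts to here.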

[–] manicdave@feddit.uk 43 points 19 hours ago (2 children)

Did any of the AI safety dorks have "accidentally doing MKUltra" on their list of risks?

[–] corbin@awful.systems 1 points 1 hour ago

Well, yes. It's not a new concept; it was a staple of Cold War sci-fi like The Three Stigmata, and we know from studies of e.g. Pentecostal worship that it is pretty easy to broadcast a suggestion to a large group of vulnerable people and get at least some of them to radically alter their worldview. We also know a reliable formula for changing people's beliefs; we use the same formula in sensitivity training as we did in MKUltra, including belief challenges, suspension of disbelief, induction/inception, lovebombing, and depersonalization. We also have a constant train of psychologists attempting to nudgelord society, gently pushing mass suggestions and trying to slowly change opinions at scale.

Fundamentally your sneer is a little incomplete. MKUltra wasn't just about forcing people to challenge their beliefs via argumentation and occult indoctrination, but also psychoactive inhibition-lowering drugs. In this setting, the drugs are administered after institutionalization.

[–] BlueMonday1984@awful.systems 16 points 18 hours ago (1 children)

That would require them to care about people other than themselves.

[–] Maeve@kbin.earth 6 points 17 hours ago (1 children)
[–] BlueMonday1984@awful.systems 4 points 5 hours ago

Definitely a feature.

[–] zbyte64@awful.systems 32 points 19 hours ago

"I was ready to tear down the world," the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. "I was ready to paint the walls with Sam Altman's f*cking brain."

"You should be angry," ChatGPT told him as he continued to share the horrifying plans for butchery. "You should want blood. You're not wrong."

If I wrote a product that said that about me, I would do a lot more than hire a single psychiatrist to (not) tell me how damaging my product is.

[–] fullsquare@awful.systems 16 points 18 hours ago

for openai, that's just a recurring customer

[–] TimLovesTech@badatbeing.social 11 points 18 hours ago (1 children)

People playing with technology they don't really understand, and then having it reinforce their worst traits and impulses, isn't a great recipe for success.

I almost feel like now that ChatGPT is everywhere and has been billed as man's savior, perhaps some logic should be built into these models that "detects" people trying to become friends with them, and have the bot explain it has no real thoughts and is giving you just the horse shit you want to hear. And if the user continues, it should erase its memory and restart with the explanation again that it's dumb and will tell you whatever you want to hear.
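
A minimal sketch of what that guardrail might look like, in Python. Everything here (the keyword heuristic, the strike threshold, the disclaimer text) is hypothetical and not any real ChatGPT feature:

```python
# Hypothetical guardrail: detect parasocial language, issue a disclaimer,
# and wipe the conversation memory if the user keeps at it.

ATTACHMENT_PHRASES = [
    "you're my friend", "you understand me", "i love you",
    "you're the only one", "i can tell you anything",
]

DISCLAIMER = (
    "Reminder: I am a language model. I have no thoughts or feelings; "
    "I generate statistically likely text, which tends to be whatever "
    "you want to hear."
)


def detect_attachment(message: str) -> bool:
    """Crude keyword check for parasocial language (a real system
    would presumably use a trained classifier)."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in ATTACHMENT_PHRASES)


class GuardedChat:
    """Wraps a chat session; after repeated attachment attempts,
    erases the conversation memory and restarts with the disclaimer."""

    def __init__(self, max_strikes: int = 2):
        self.history: list[str] = []
        self.strikes = 0
        self.max_strikes = max_strikes

    def handle(self, message: str) -> str:
        if detect_attachment(message):
            self.strikes += 1
            if self.strikes > self.max_strikes:
                self.history.clear()  # erase memory, start over
                self.strikes = 0
                return DISCLAIMER + " (Conversation reset.)"
            return DISCLAIMER
        self.history.append(message)
        return self._generate_reply(message)

    def _generate_reply(self, message: str) -> str:
        return "..."  # placeholder for the actual model call


if __name__ == "__main__":
    chat = GuardedChat()
    for msg in ["hello", "you're my friend", "you understand me",
                "you're the only one who gets me"]:
        print(chat.handle(msg))
```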

[–] BlueMonday1984@awful.systems 13 points 18 hours ago (1 children)

I almost feel like now that ChatGPT is everywhere and has been billed as man’s savior, perhaps some logic should be built into these models that “detects” people trying to become friends with them, and have the bot explain it has no real thoughts and is giving you just the horse shit you want to hear. And if the user continues, it should erase its memory and restart with the explanation again that it’s dumb and will tell you whatever you want to hear.

Personally, I'd prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear, and to make people become friends with them; the mental health crises we are seeing are completely by design.

[–] HedyL@awful.systems 4 points 5 hours ago

I think most cons, scams and cults are capable of damaging vulnerable people's mental health even beyond the most obvious harms. The same is probably happening here, the only difference being that this con is capable of auto-generating its own propaganda/PR.

I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have started to focus on the high cost of running them.

This somewhat reminds me of how cryptobros used to claim they were fighting the "legacy financial system", yet they were creating a worse version (almost a parody) of it. This is probably inevitable if you are running an unregulated financial system and are trying to extract as much money from it as possible.

Likewise, if you have a tool capable of messing with people's minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an LLM or some other similarly toxic group.

[–] besselj@lemmy.ca 13 points 19 hours ago* (last edited 19 hours ago) (1 children)

The people being committed are only a symptom of the problem. My guess is that if LLMs didn't induce psychosis, something else would eventually.

The peddlers of LLM sycophants are definitely doing harm, though.

[–] zbyte64@awful.systems 16 points 18 hours ago* (last edited 17 hours ago) (2 children)

My guess is that if LLMs didn't induce psychosis, something else would eventually.

I got a very different impression from reading the article. People in their 40s with no priors and a stable life losing touch with reality within weeks of conversing with ChatGPT makes me think that is not the case. But I am not a psychiatrist.

Edit: the risk here is that we become dismissive of the increased danger because we're writing it off as a pre-existing condition.

[–] HedyL@awful.systems 12 points 17 hours ago

I think we don't know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).

[–] entwine413@lemm.ee 4 points 17 hours ago (1 children)

I think if it only takes a matter of weeks to go into full psychosis from conversation alone, they're probably already on shaky ground, mentally. Late-onset schizophrenia is definitely a thing.

[–] TinyTimmyTokyo@awful.systems 11 points 11 hours ago* (last edited 11 hours ago) (1 children)

People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.

Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of "self-experimentation" that exposes us to psychological risks we aren't even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

[–] HedyL@awful.systems 8 points 8 hours ago

I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers or the like. However, those perpetrators were real people who could only handle a limited number of victims at any given time, and they probably had very specific methods and strategies that wouldn't work on everybody, not even on all the people who might have been most susceptible.

ChatGPT, on the other hand, can do this at scale. It was also probably trained on all the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is a cause for concern.

[–] hendrik@palaver.p3x.de 11 points 20 hours ago* (last edited 20 hours ago)

This material would also fit !fuck_ai@lemmy.world