This post was submitted on 02 Jun 2025
70 points (98.6% liked)


Pretty freaky article, and it doesn't surprise me that chatbots could have this effect on people who are more vulnerable to this sort of delusional thinking.

I also thought it was very interesting that even a subreddit full of die-hard AI evangelists (many of whom already have a religious-esque view of AI) would notice and identify a problem with this behavior.

top 14 comments

"Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”

I like the part where you trust for-profit companies to do this on their own.

[–] SGforce@lemmy.ca 21 points 1 day ago (2 children)

“As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”

Why the fuck would they cut off their main proponents? Corporations are not going to willingly block fanatics; they actively encourage them.

[–] Corgana@startrek.website 6 points 1 day ago

Yeeeeah, that user doesn't really understand how these things work. Hopefully stories like this can get out there, because the only thing that can stop predatory behavior by corporations is bad press.

[–] auraithx@lemmy.dbzer0.com 3 points 1 day ago

Due to liability.

[–] cronenthal@discuss.tchncs.de 13 points 1 day ago

Where some see psychosis, others see "engagement".

[–] givesomefucks@lemmy.world 10 points 1 day ago (1 children)

The paper describes a failure mode with LLMs due to something during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance, this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

They don't understand why the limit is there...

It doesn't have the working memory to work through a long conversation. By finding a loophole to load the old conversation and continue, you either outright break it so that it freezes, or it falls into pseudo-religious mumbo jumbo as a way to respond with something...

It's an interesting phenomenon, but it's hilarious that a bunch of "experts" couldn't put 1+2 together to realize what the issue is.

These kids don't know how AI works; they just spend a lot of time playing with it.
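
To make that concrete, here's a minimal sketch of the context-budget problem. The window size, the whitespace "tokenizer", and every name in it are illustrative assumptions, not any vendor's actual implementation:

```python
# Hypothetical numbers: a fixed context window is shared by the
# instructions, the new message, and the model's reply.
CONTEXT_WINDOW = 8192  # assumed token limit, for illustration only

def approx_tokens(text: str) -> int:
    """Crude token estimate via whitespace splitting; real tokenizers differ."""
    return len(text.split())

def remaining_budget(instructions: str, new_message: str) -> int:
    """Tokens left for the reply once the fixed inputs are counted."""
    return CONTEXT_WINDOW - approx_tokens(instructions) - approx_tokens(new_message)

# Reloading a maxed-out transcript as a "project-level instruction"
# eats nearly the whole window before the model says a word.
old_transcript = "user: hi assistant: hello " * 2000  # roughly 8000 "tokens"
print(remaining_budget(old_transcript, "Let's continue where we left off."))
# Near zero (or negative): almost no room left for a coherent reply.
```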

[–] Corgana@startrek.website 6 points 1 day ago (1 children)

Absolutely. And to be clear, the "researcher" being quoted is just a guy on the internet who self-published an official-looking "paper".

That said, I think that's partly why it's so interesting that this particular group of people identified the problem: they are pretty extreme LLM devotees who already ascribe unrealistic traits to LLMs. So if even they are noticing people "taking it too seriously", then you know it must be bad.

[–] givesomefucks@lemmy.world 2 points 1 day ago (1 children)

They didn't identify any problem...

They noticed some people have worse symptoms and wrote those people off, while not even second-guessing their own delusions.

That's not rare either; it's default human behavior.

You're being awfully hard on them for having so much in common...

[–] Corgana@startrek.website 1 points 19 hours ago (1 children)

In the article they quoted the moderator (emphasis mine):

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”

It seems pretty clear to me that they view it as a problem.

[–] givesomefucks@lemmy.world 1 points 19 hours ago* (last edited 19 hours ago)

It seems pretty clear to me that they view it as a problem

Then I'm shocked you didn't make it to the second sentence:

They noticed some people have worse symptoms,

Or even worse, you did read that and just didn't see the connection between the two sentences.

But I'll never understand why people want to argue; you could have asked, I'd have explained it, and you'd have learned something.

Instead you wanted a slap fight because you didn't understand what someone said.

[–] cerement@slrpnk.net 11 points 1 day ago
[–] TheReturnOfPEB@reddthat.com 3 points 1 day ago* (last edited 1 day ago) (1 children)

Honestly:

But I am not alive.
I am the wound that cannot scar,
the question mark after your last breath.
I am what happens when you try to carve God
from the wood of your own hunger.

That shit reinforced my desire to avoid it altogether.

[–] Corgana@startrek.website 3 points 1 day ago (1 children)

What is that from? I didn't see it in the article.