this post was submitted on 05 Jun 2025
954 points (98.8% liked)
Not The Onion
Well, that's the thing: LLMs don't reason. They're basically probability engines for words, so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle work of interpreting a patient's desires and motivations so as to guide them through a minefield in their own mind and emotions.
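That "probability engine" point can be shown with a toy sketch (the vocabulary and probabilities below are entirely made up for illustration, standing in for a real model's next-token distribution): the next word is picked purely by how likely it is, and nothing in the mechanism ever checks what the words mean for the listener.

```python
import random

# Made-up bigram probabilities, a stand-in for a real model's
# next-token distribution. A real LLM does this at enormous scale,
# but the principle is the same: score continuations by likelihood.
next_word_probs = {
    ("you", "should"): {"try": 0.5, "stop": 0.3, "rest": 0.2},
}

def pick_next(context, rng=None):
    """Sample the next word by probability alone; no logical check of
    whether the continuation is safe or sensible happens anywhere."""
    rng = rng or random.Random(0)  # fixed seed just for repeatability
    probs = next_word_probs[context]
    words = list(probs)
    return rng.choices(words, weights=[probs[w] for w in words], k=1)[0]

print(pick_next(("you", "should")))
```

Whatever word comes out, it comes out because it was statistically common after that context in training text, not because anyone weighed the consequences of saying it.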
So the problem is twofold, and more generic than just therapy/advice. In this specific case:

- LLMs might just put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette).
- They can't really do the subtle, multi-layered elements of analysis (the stuff beyond "if A then B" and into "why A", "what makes a person choose A, and can they find a way to avoid B by not choosing A", "what's the point of B", and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as looking at the possible causes, of the causes, of the causes of a certain outcome, and then trying to figure out what can be changed at a higher level so that the last level ("the causes of a certain outcome") can't even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to work out for a reasoning entity. For example: "I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give me one of them."
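The umbrella example can even be drawn as a little cause graph (purely hypothetical, just mirroring the story above): walking the causes, of the causes, of the causes is exactly the multi-level step being described, and fixing a deep cause removes the shallow outcome entirely.

```python
# Hypothetical cause graph for the umbrella story: each outcome maps
# to the causes behind it, one level deeper each time.
causes = {
    "got drenched": ["went out in the rain", "have no umbrella"],
    "have no umbrella": ["never asked brother for one of his spares"],
}

def trace_causes(outcome):
    """Recursively list (outcome, cause) pairs: the causes of the
    causes. Acting on a deep cause (ask the brother for a spare)
    makes a whole branch of the shallow outcome impossible."""
    edges = []
    for cause in causes.get(outcome, []):
        edges.append((outcome, cause))
        edges.extend(trace_causes(cause))
    return edges

for outcome, cause in trace_causes("got drenched"):
    print(f"{outcome} <- {cause}")
```

The point isn't the code itself; it's that each specific graph like this is unique to one person's situation, so the exact chain won't be sitting in any training data to be pattern-matched.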