this post was submitted on 18 Jun 2024
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
Those are examples of actual hallucinations, where the model asserts something that never happened.
Quoting a joke Reddit thread as factual is not hallucinating. The thread did exist; it just wasn't factual, and an LLM is wrong to present it as if it were.
That’s the issue. LLMs aren’t trustworthy. They hallucinate.
I presume, as the default, that anything an LLM produces is a hallucination right out of the gate.
"Hallucination" implies LLMs can meaningfully perceive. They can't, they're not made that way and they have no reason to be.
We’re arguing about language now, though, and by definition it isn’t “hallucinating”. By calling it that, you’re unintentionally legitimizing the “AI is making decisions” misinformation.
To get really pedantic, “flashback” would be a better label. It’s not making things up out of whole cloth, just repeating stuff way out of context.