Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
LLMs are awesome in their knowledge until you start to hear their answers about stuff you already know, and it makes you wonder if anything they said was correct.
What they call hallucinations was, in other fields, called fabulation: inventing tales or stories.
I'm curious what the shortest acceptable answer for these things is, and whether something close to "I don't know" is even an option.
This applies equally well to human-generated answers to stuff.
True, the difference is that with humans it's usually more public, so it's easier for someone to call bullshit. With LLMs the bullshit is served with the intimacy of embarrassing porn, so it's less likely to see any warnings.
Sounds similar to Betteridge's law of headlines.
I'm sure there are tricks like adding 'fact-check your response' to the prompt, but I suspect there is something intrinsic to these models that makes it a super difficult problem.
I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.
Not designed, but trained. Training involves rewarding answers, so they WILL give you something. "I don't know" is not going to fare well in the training process, so it naturally gets filtered out, while very creative (but wrong) LLMs do well.
This is mostly just a matter of proper prompting.
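As a rough illustration of what that kind of prompting looks like, here's a minimal sketch assuming the OpenAI Python SDK; the model name, the wording of the system prompt, and the test question are all placeholders I picked for the example, and nothing here guarantees the model will actually obey the instruction.

```python
# Minimal sketch: nudge a chat model toward "I don't know" instead of guessing.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only if you are confident the information is correct. "
                "If you are not sure, reply exactly with: I don't know."
            ),
        },
        # Placeholder question chosen to tempt the model into fabulating a name.
        {"role": "user", "content": "Who won the 1907 Tour de France?"},
    ],
    temperature=0,  # lower temperature to discourage creative guessing
)

print(response.choices[0].message.content)
```

In my experience this kind of instruction reduces confident guessing but doesn't eliminate it, which is why people upthread suspect the problem is intrinsic to how the models are trained rather than something a prompt fully fixes.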