this post was submitted on 20 May 2025
222 points (97.0% liked)

Technology

70173 readers
3496 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
mbtrhcs@feddit.org 1 points 9 hours ago

> I don't know if it's just my age/experience or some kind of innate "horse sense", but I tend to do alright at detecting shit responses, whether they're human trolls or an LLM that is lying through its virtual teeth.

I'm not sure how you would do that when you're asking about something you don't yet have expertise in, since the LLM takes exactly the same authoritative tone whether or not the information is real.

> Perhaps with a reasonable prompt an LLM can be more honest about when it's hallucinating?

So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM's actual internal state.
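One common way researchers quantify this mismatch is a calibration check: compare the confidence a model *states* against how often it is actually correct. The sketch below (my illustration, not from the comment or any specific paper; the function name and bin count are arbitrary) computes a simple Expected Calibration Error over stated-confidence/correctness pairs. If verbalized certainty tracked the model's internal state, this gap would be near zero.

```python
# Minimal sketch: how far a model's *stated* confidence is from its
# empirical accuracy. samples = list of (confidence in [0,1], was_correct).

def calibration_gap(samples, n_bins=5):
    """Return the bin-weighted average |stated confidence - accuracy|
    (a simple Expected Calibration Error)."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in samples:
        # Place each sample in a confidence bucket, clamping conf == 1.0.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    total, gap = len(samples), 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        gap += len(b) / total * abs(avg_conf - accuracy)
    return gap

# A model that always claims "90% sure" but is right only half the time:
overconfident = [(0.9, True), (0.9, False)] * 10
print(round(calibration_gap(overconfident), 2))  # 0.4
```

A well-calibrated answerer would score near 0.0; the large gap here is exactly the "authoritative tone regardless of truth" problem described above.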