this post was submitted on 23 Mar 2025
767 points (97.8% liked)

Technology


A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as "a convicted criminal who murdered two of his children and attempted to murder his third son," a Noyb press release said.

[–] OpenPassageways@lemmy.zip 58 points 1 week ago (3 children)

To me it's clear that these tools are primarily useful as bullshit generators, and I expect them to hallucinate and be inaccurate. But the companies trying to capitalize on the "AI" bubble are saying that these tools can be useful and accurate. I imagine OpenAI is going to have to invoke the Fox News defense in this case, and claim that "no reasonable person would take this seriously".

[–] ElPussyKangaroo@lemmy.world 1 points 3 days ago

I feel like the primary use of these tools is grammar and writing assistance. Everything else is just extra tools plugged in to extend them... although the way Perplexity does it is considerably more useful than the rest.

[–] oxysis@lemm.ee 33 points 1 week ago (1 children)

Don’t use "hallucinate" to describe what it is doing; that humanizes it and makes the tech seem more advanced than it is. It is randomly mashing words together without understanding the meaning of any of them.

[–] FaceDeer@fedia.io 10 points 1 week ago (1 children)
[–] ech@lemm.ee 7 points 1 week ago (2 children)

The technical term was created to promote the misunderstanding that LLMs "think". The "experts" want people to think LLMs are far more advanced than they actually are. You can add as many tokens to your context as you want - every model is still, fundamentally, a text generator. Humanizing it more than that is naive or deceptive, depending on how much money you have riding on the bubble.
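The "text generator" point can be illustrated with a deliberately crude sketch. This is a toy bigram model, not how LLMs actually work internally (real models use learned neural weights over huge vocabularies, not a lookup table of word pairs), but the generation loop is the same in spirit: pick the next token from statistics of what came before, one token at a time, with no step that involves understanding.

```python
import random

def train_bigrams(corpus):
    """Count which token follows which in the training text."""
    table = {}
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Autoregressive loop: sample the next token given only the last one.
    At no point does the generator know what any token means."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: nothing ever followed this token in training
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the model predicts the next token and the next token follows the last"
table = train_bigrams(corpus)
print(generate(table, "the", 8, seed=1))
```

The output is locally plausible word-to-word but has no relation to truth, which is why confidently wrong statements fall out of the same mechanism as correct ones.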

[–] FaceDeer@fedia.io 9 points 1 week ago

You didn't read the article I linked. The term came into use before LLMs were a thing; it was originally used in relation to image processing.

[–] oxysis@lemm.ee 3 points 1 week ago