this post was submitted on 04 Jan 2024
177 points (90.4% liked)

ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

[–] Maven@lemmy.sdf.org 5 points 10 months ago

The internet is full of AI-generated text now, which is poison to training models. But it’s good at pretending.

This misconception shows up again and again. It's wishful thinking from people who want to believe AI researchers are idiots and that AIs will poison themselves.

These models aren't trained on "the internet". They don't just thoughtlessly rip everything that's ever been posted every time they want to make an updated bot. The vast bulk of training data was scraped years ago, predating the current tide of generative muck, and additions are carefully curated to avoid the exact thing you're talking about. A scrape of the 2018 internet is plenty, and will remain so for years and years.
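To make the curation point concrete, here is a minimal, hypothetical sketch of one of the simplest filters a data pipeline can apply: dropping any document crawled after a chosen cutoff date, so that material from the era of widespread generative text never enters the corpus. The field names and cutoff are illustrative assumptions, not taken from any real training pipeline.

```python
from datetime import datetime, timezone

# Illustrative cutoff: exclude anything crawled after generative text
# became widespread. The exact date is an assumption for this sketch.
CUTOFF = datetime(2022, 11, 1, tzinfo=timezone.utc)

def keep_document(doc: dict) -> bool:
    """Keep only documents whose crawl timestamp predates the cutoff."""
    crawled = datetime.fromisoformat(doc["crawled_at"])
    return crawled < CUTOFF

# Hypothetical corpus entries with ISO-8601 crawl timestamps.
corpus = [
    {"text": "an old forum post", "crawled_at": "2018-06-01T00:00:00+00:00"},
    {"text": "a recent article", "crawled_at": "2023-03-15T00:00:00+00:00"},
]

curated = [d for d in corpus if keep_document(d)]
```

Real pipelines layer many more heuristics on top (deduplication, quality classifiers, source whitelists), but a date cutoff alone is enough to show why "the models eat the whole live internet" is not how curation works.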