this post was submitted on 01 Aug 2024
2229 points (98.9% liked)

Technology

[–] oyo@lemm.ee 160 points 7 months ago (3 children)

LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

[–] pumpkinseedoil@sh.itjust.works 41 points 7 months ago (2 children)

Often the answers are pretty good. But you never know if you got a good answer or a bad answer.

[–] Blackmist@feddit.uk 55 points 7 months ago (1 children)

And the system doesn't know either.

For me this is the major issue. A human is capable of saying "I don't know". LLMs don't seem able to.

[–] xantoxis@lemmy.world 35 points 7 months ago (1 children)

Accurate.

No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There's no concept of not knowing the answer, because they don't know anything in the first place.

[–] Blackmist@feddit.uk 18 points 7 months ago (1 children)

The worst for me was a fairly simple programming question. The class it used didn't exist.

"You are correct, that class was removed in OLD version. Try this updated code instead."

It gave another made-up class name.

When I pointed that out too, it repeated the same excuse with a newer version number.

It knows what answers smell like, and the same with excuses. Unfortunately there's no way of knowing whether it's actually bullshit until you take a whiff of it yourself.
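One cheap way to take that whiff automatically is to check that a suggested class is actually importable before trusting the code. A minimal sketch in Python (the `JSONFixer` name below is a deliberately fake example of a hallucinated class; `JSONDecoder` is real):

```python
import importlib

def class_exists(module_name: str, class_name: str) -> bool:
    """Return True if class_name can be imported from module_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # Module itself doesn't exist
        return False
    return hasattr(module, class_name)

# json.JSONDecoder is real; json.JSONFixer is a made-up name
print(class_exists("json", "JSONDecoder"))  # True
print(class_exists("json", "JSONFixer"))    # False
```

It only catches names that don't exist at all, not methods with the wrong signature or behavior, but it filters out the most obvious fabrications before they reach your codebase.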

[–] nilloc@discuss.tchncs.de 5 points 7 months ago

So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

From what I’ve seen you’ll need an iron stomach.

[–] treadful@lemmy.zip 13 points 7 months ago (1 children)

They really aren't. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It's good at getting broad strokes but the details are very often wrong.

Now imagine someone that doesn't have your expertise reading that answer. They won't recognize those details are wrong until it's too late.

[–] Quereller@lemmy.one 6 points 7 months ago

That's about my experience too. I asked it for factual information in the field I work in. It didn't give correct answers, or it gave protocols that were strange and wouldn't have been successful.

[–] markon@lemmy.world 3 points 7 months ago

Sounds familiar. Citation please