this post was submitted on 13 Nov 2023
229 points (96.4% liked)

Technology

[–] Brunbrun6766@lemmy.world 48 points 10 months ago (5 children)

What do you do when ChatGPT just makes shit up or answers yes-or-no questions incorrectly? You'd have no way of knowing it was wrong.

[–] gridleaf@lemmy.world 15 points 10 months ago (1 children)

ChatGPT is most useful when you may not know the right answer, but you know a wrong answer when you see one. It's very useful for technical issues. Much quicker for troubleshooting than searching page after page for a solution.

[–] TheDarkKnight@lemmy.world 2 points 10 months ago

It's actually great at troubleshooting Linux stuff, weirdly enough lol

[–] Evotech@lemmy.world 1 points 10 months ago* (last edited 10 months ago)

Bing AI provides references in the "more precise" version

[–] IronKrill@lemmy.ca 1 points 10 months ago (1 children)

While this is an important thing to understand about AI, it's an overstated issue once understood. For most questions I ask AI, it doesn't matter if it's strictly correct as long as it pulls some half-useful info to get me on track (e.g., programming). For other questions, I only ask it when I need to figure out where to look next, which it will usually do just fine.

The first page of my search results is all AI-generated garbage articles anyway; at least I know what I'm getting with GPT and can take it as such.

[–] Womble@lemmy.world 2 points 10 months ago

Yup, as long as you're aware that it could be wrong and look at it critically, LLMs at GPT scale are very useful tools. The best way I've heard it described is as a lightning-fast intern who often gets things wrong but will always give it a go.

So long as you're calibrated to "how might this be wrong?" when looking at the results, it's exceptionally useful.

[–] otter@lemmy.ca 1 points 10 months ago

Not the other commenter:

I usually have an idea about the thing I'm asking, and if not then I'll look up the topics mentioned after some guided brainstorming

I've also found that asking the same question again, after resetting the chat, can give you an idea of whether the answer is consistent or just made up.
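The reset-and-reask check above can be sketched as a simple self-consistency test. This is only an illustration: `ask` is a hypothetical stand-in for whatever LLM call you use (each call standing in for a fresh chat), not a real API.

```python
def ask(prompt):
    """Hypothetical stand-in for an LLM call made in a fresh chat.

    Replace with a real client call; stubbed here for illustration.
    """
    return "Paris"


def looks_consistent(prompt, n=3):
    """Ask the same question n times in fresh chats and compare.

    If the answers disagree, treat the result with extra suspicion;
    agreement is no guarantee of correctness, just a cheap sanity check.
    """
    answers = {ask(prompt) for _ in range(n)}
    return len(answers) == 1
```

With the stub above, `looks_consistent("What is the capital of France?")` returns `True`; with a real model, a `False` result is the signal to go verify elsewhere.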