this post was submitted on 13 Sep 2023
Technology
Avram Piltch is the editor in chief of Tom's Hardware, and he's written a thoroughly researched article breaking down the promises and failures of LLM AIs.

[–] CanadaPlus@lemmy.sdf.org 2 points 1 year ago* (last edited 1 year ago) (1 children)

You know, I think ChatGPT is way ahead of a toaster. Maybe it's more like a small animal of some kind.

[–] nyan@lemmy.cafe 2 points 1 year ago (1 children)

One could equally claim that the toaster was ahead, because it does something useful in the physical world. Hmm. Is a robot dog more alive than a Tamagotchi?

[–] abhibeckert@beehaw.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

There are a lot of subjects where ChatGPT knows more than I do.

Does it know more than someone who has studied that subject their whole life? Of course not. But those people aren't available to talk to me on a whim. ChatGPT is available, and it's really useful. Far more useful than a toaster.

As long as you only use it for things where a mistake won't be a problem, it's a great tool. You can even use it for "risky" decisions, as long as you take the information it gives you to an expert for verification before acting on it.

[–] nyan@lemmy.cafe 3 points 1 year ago

Sorry to break it to you, but it doesn't "know" anything except what text is most likely to come after the text you just typed. It's an autocomplete. A very sophisticated one, granted, but it has no notion of "fact" and no real understanding of the content of what it's saying.
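To make the "sophisticated autocomplete" point concrete, here's a deliberately crude sketch: a bigram word-frequency model that predicts the most likely next word. This is a massive simplification (real LLMs use neural networks over token sequences, not raw word counts), but the core idea is the same — the model has statistics about what tends to follow what, not facts:

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model: for each word, count which words follow it.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

# "the" was followed by "cat" twice, "mat" and "fish" once each,
# so the model "completes" with "cat" -- pure frequency, no understanding.
print(most_likely_next("the"))  # prints "cat"
```

The model doesn't know what a cat is; it only knows that "cat" frequently followed "the" in its training data. Scaling that idea up by many orders of magnitude gets you something far more fluent, but it's still prediction, not knowledge.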

Saying that it knows what it's spouting back to you is exactly what I warned against up above: anthropomorphization. People did this with ELIZA too, and it's even more dangerous now than it was then.