this post was submitted on 16 May 2024
Funny
What? That's not true at all.
-Wikipedia https://en.m.wikipedia.org/wiki/Artificial_intelligence
So I'll concede that the more I read the replies, the more I see the term does apply, though it still annoys me when people just call it AI and act like it belongs in the same category as the robots we associate the Three Laws with. I think I thought AI referred more to AGI. So I'll say it's nowhere near an AGI, and we'd likely need an AGI before something like the Three Laws is even worth considering, and it'd obviously be much muddier than fiction.
The point I guess I'm trying to make is that applying the Three Laws to an LLM is like wondering if your printer might one day find love. It just isn't relevant: they're designed for very specific, specialized functions, and an instruction like "don't kill humans" is pretty pointless to give an LLM that, in this context, can basically just answer questions.
If it were going to kill somebody, it would be through an error, like a hallucination or bad training data leading it to tell somebody something dangerously wrong. It's supposed to be right already. Telling it not to kill is like telling your printer not to rob the Office Depot: if it breaks that rule, something has already gone very wrong.
There I agree wholeheartedly. LLMs seem to be touted not only as AI but as actual intelligence, which they most certainly are not.
You are not alone in that confusion. "AI" is whatever a machine can't do at the moment; that's a famous paradox, sometimes called the AI effect.
For example, for years some philosophers claimed a computer could never beat the human masters of chess. They argued that this requires a kind of intelligence which machines cannot develop.
Turns out chess programs are relatively easy. Some time after that, the unbeatable goal was Go. So many possibilities in Go, no machine could conquer that! Turns out they can.
Another unbeatable goal was natural language, which we've now kinda solved, or are at least in the process of solving.
It's strange that in the actual field of computer science we call all of the above AI, while much of the public wants to call none of it that. My guess is it's just humans being conceited and arrogant: no machine (and no other animal, mind you) is like us or can be like us. That's literally something you can read in peer-reviewed philosophy texts.