My coworker just gave me this rant the other day about AI.
which do not think on their own,
Most humans don't either. But I think you are conflating two different things: intelligence (the ability to reason) and consciousness (being able to do so on your own). I personally believe both of those things spontaneously came into existence in our brains once they became complex enough, and that we are just quantitatively not very far from creating networks complex enough ourselves. The last big breakthrough was the ability to create training data sets for AI with AI without making the models degenerate.
The term AI has been around longer than LLMs, and refers to a wide variety of different algorithms and approaches to automatically extracting and working with information.
LLMs are an AI technique, just like Bayesian networks for causal inference are an AI technique.
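To make that concrete, here is a minimal sketch (plain Python, no libraries, with made-up probabilities) of the kind of "classic AI" technique being referred to: a tiny Bayesian network with two binary variables, Rain -> WetGrass, and exact inference by enumeration.

```python
# Tiny Bayesian network: Rain -> WetGrass.
# Probabilities are invented purely for illustration.

P_RAIN = {True: 0.2, False: 0.8}            # P(Rain)
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass=True | Rain)

def p_rain_given_wet(wet: bool = True) -> float:
    """P(Rain=True | WetGrass=wet) via Bayes' rule / enumeration."""
    joint = {}
    for rain in (True, False):
        p_wet = P_WET_GIVEN_RAIN[rain] if wet else 1 - P_WET_GIVEN_RAIN[rain]
        joint[rain] = P_RAIN[rain] * p_wet
    return joint[True] / (joint[True] + joint[False])

print(round(p_rain_given_wet(True), 3))  # ~0.692: rain becomes much more likely once the grass is wet
```

No learning from text, no neural network, yet it reasons under uncertainty, which is exactly why it falls under the broad umbrella term "AI".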
The issue isn't that we don't have "real" AI, it's that most people are misusing a general technical term, and then being indignant that it doesn't exactly match a very specific subcategory (AGI, or artificial general intelligence).
You see the same thing with people calling cryptocurrency "crypto", even though that word is typically used among experts to refer to "cryptography", which is mostly not relevant to currency in the slightest.
This one's not on the tech people, it's on the people who keep misusing the words.
This has been a thing for a long time
Clippy was an assistant, Cortana was an intelligent assistant and Copilot is AI
None of these are accurate, it's always like a generation behind
Clippy just was, Cortana was an assistant, and Copilot is an intelligent assistant
The next one they make could actually be AI
It's annoying because either all of it should be AI or none of it.
which do not think on their own, but pass Turing tests (fool humans into thinking that they can think).
How do you know that?
Humans possess an esoteric ability to create new ideas out of nowhere, never before thought of. Humans are also capable of inspiration, which may appear similar to the way that AIs remix old inputs into "new" outputs, but the rules of creativity aren't bound by any set parameters the way an LLM is. I'm going to risk making a comment that ages like milk and just spitball: true artificial intelligence that matches a human is impossible.