This post was submitted on 16 Sep 2024
25 points (77.8% liked)
Asklemmy
You can't turn a spicy autocorrect into anything even remotely close to Jarvis.
It's not autocorrect; it's a text predictor. So I'd say you could definitely get close to JARVIS, especially since we don't even fully understand why it works yet.
You're just being pedantic. Most autocorrect/keyboard-autocomplete systems rely on text predictors to function. Look at the three suggestions on your phone keyboard whenever you type: that's also a text predictor (granted, a much simpler one).
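For the curious, here's a minimal sketch of the kind of predictor that could sit behind those three keyboard suggestions: count word bigrams in some text, then suggest the most frequent followers of the last word typed. The toy corpus and the `suggest` function are invented for illustration; real keyboards use far more sophisticated models, but the shape is the same.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this example.
corpus = "the cat sat on the mat and the cat ran to the door".split()

# Bigram counts: followers[w] counts every word seen immediately after w.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(last_word, k=3):
    """Return up to k most frequent next words after last_word."""
    return [word for word, _ in followers[last_word].most_common(k)]

print(suggest("the"))  # -> ['cat', 'mat', 'door']
```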
Text predictors (obviously) predict text, and as such have no actual understanding of the text they're outputting. An AI that doesn't understand its own outputs isn't going to achieve anything close to a sci-fi depiction of an AI assistant.
It's also not like the devs are confused about why LLMs work. If you had every publicly uploaded sentence since the creation of the Internet as training material, I would hope the resulting model would be a pretty good autocomplete, even to the point of being able to answer some questions.
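To see how raw autocomplete can shade into question answering, here's a toy continuation loop using the same bigram-counting idea as the sketch above. The corpus is invented, and a real LLM predicts tokens with a neural network rather than a count table, but the point carries over: if the training text contains the answer as a likely continuation, greedy prediction reproduces it.

```python
from collections import Counter, defaultdict

# Tiny invented corpus containing one Q&A-shaped sentence.
corpus = ("q : capital of france ? "
          "a : the capital of france is paris .").split()

# Same bigram table as the keyboard sketch.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def continue_text(prompt, max_words=2):
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        options = followers[words[-1]]
        if not options:  # no known follower; stop generating
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the capital of france is"))
# -> "the capital of france is paris ."
```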
Yes, autocorrect may use text predictors. No, that does not make text predictors "spicy autocorrect". The denotation may be correct, but the connotation isn't.
There's a long-running philosophical debate about whether we actually know what we're thinking, but I'm not going to get into that. All I'll elaborate on is the Chinese room thought experiment, which suggests that perhaps an AI doesn't need to genuinely understand things to display enough apparent intelligence for most purposes.
Yes, they are. All they know is that if you train a text predictor long enough, at some point it hits a usability plateau well below its targets, and then one day it suddenly surpasses that plateau for no apparent reason.
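What's being described here is the phenomenon researchers call "grokking" (Power et al., 2022): on small algorithmic tasks, a network first memorizes the training set while validation accuracy sits near chance, then, many epochs later, validation performance jumps abruptly. Below is a minimal PyTorch sketch of that classic setup (modular addition, small network, heavy weight decay); the architecture and hyperparameters are illustrative guesses, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

P = 97  # modulus: learn (a + b) mod P from examples
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Random train/validation split; grokking shows up with small train fractions.
perm = torch.randperm(len(pairs))
n_train = int(0.4 * len(pairs))
train_idx, val_idx = perm[:n_train], perm[n_train:]

model = nn.Sequential(
    nn.Embedding(P, 64),              # embedding for each operand token
    nn.Flatten(),                     # concatenate the two operand embeddings
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),                # logits over the P possible sums
)
# Heavy weight decay is a common ingredient in grokking experiments.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

# Full-batch training; the delayed jump can take tens of thousands of steps.
for epoch in range(20000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if epoch % 500 == 0:
        with torch.no_grad():
            preds = model(pairs[val_idx]).argmax(-1)
            val_acc = (preds == labels[val_idx]).float().mean().item()
        print(f"epoch {epoch}: train loss {loss.item():.3f}, val acc {val_acc:.3f}")
```

If this run behaves like the reported experiments, training loss drops early while validation accuracy stays low for a long stretch before climbing sharply, which is exactly the "sudden surpassing of the bottleneck" described above.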