this post was submitted on 17 Aug 2023
483 points (96.2% liked)
Technology
There’s no learning of concepts. That’s why models hallucinate so frequently. They don’t “know” anything, they’re doing a lot of math based on what they’ve seen before and essentially taking the best guess at what the next word is.
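As a sketch of what that "best guess" looks like: at the final step, a model scores every word in its vocabulary and picks from the resulting probabilities. Here's a toy Python example (the vocabulary and scores are invented for illustration, not from any real model):

```python
import numpy as np

# Toy illustration: a language model assigns a score (logit) to every
# word it knows, turns the scores into probabilities with softmax, and
# then takes its "best guess" at the next word.
vocab = ["cat", "sat", "mat", "dog"]
logits = np.array([2.0, 0.5, 1.0, -1.0])  # made-up scores for the current context

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax: scores -> probabilities

next_word = vocab[int(np.argmax(probs))]  # greedy decoding: pick the most likely word
print(next_word)                          # -> "cat"
```

Real models usually sample from those probabilities rather than always taking the top word, which is part of why the output varies from run to run.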
There very much is learning of concepts. This is completely provable. You can give it problems it has never seen before and it will come up with good solutions.
Very much like humans do. Many people think that somehow their brain is special, but really, you're just neurons behaving as neurons do, which can be modeled mathematically.
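For what it's worth, the unit being modeled is simple to write down. A minimal sketch of one artificial neuron, with weights and inputs made up purely for illustration:

```python
import numpy as np

# One artificial "neuron", the building block of neural nets:
# a weighted sum of inputs plus a bias, squashed by a nonlinearity.
def neuron(inputs, weights, bias):
    return np.tanh(np.dot(inputs, weights) + bias)

x = np.array([0.2, -0.5, 0.9])   # incoming signals from other neurons
w = np.array([1.5, -2.0, 0.7])   # connection strengths (the part training adjusts)
print(neuron(x, w, bias=0.1))    # the neuron's output activation
```

A network is just millions or billions of these wired together, with the weights adjusted during training.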
This take often denies that entropy, soul or not, is critically important for the kinds of intelligence that aren't controlled by reward and punishment with an iron fist.
It sounds like you know English words but cannot compose them. I honestly cannot parse what you said.
We can't even map the entire brain of a mouse, given the sheer number of neurons and connections involved. Mapping a human brain 1:1 will eventually happen, and that's likely when I'll be convinced AI is capable of individual thought and actual intelligence.
Just saw this today. You should check it out, nitwit: https://www.theguardian.com/science/2023/aug/15/scientists-reconstruct-pink-floyd-song-by-listening-to-peoples-brainwaves
Edit: "nitwit" was uncalled for, but I do think you are an ignorant person.
You aren't magical. You don't have a soul that talks to Jesus. You're a bunch of organized electrical signals: a machine. Just because your machine is carbon-based doesn't mean you're special.
Edit: Downvote all you want, but we're all still animals. Most people don't even believe that simple fact. Then again, most people don't even understand how their cellphone works.
I fundamentally disagree, and if that's your take on humanity, I'm scared for our future.
There is a human element to us. I’m not spiritual at all. I believe when we die the lights just go out and we cease to exist. But there is undoubtedly a part of us that is still far from being replicated in a machine. I’m not saying it won’t happen, I’m saying we’re a long way from it and what we’re seeing out of current AI is nothing even close to resembling intelligence.
So when it happens, you'll change your mind? My point is that what we have today is modeled on the interactions in the human brain: neural networks. You can say, "They're just guessing the next word based on mathematical models", but isn't that exactly what you're doing?
Point to the reason why what comes out of your mouth is any different. Is it because your network is bigger and more complicated? If that's the case, GPT-4 is closer to being human than GPT-3 was, since it's a larger model.
I just don't get your point at all.
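And "larger model" is literal: a model is bigger when it has more learned numbers (parameters). A rough sketch of how that counting works, with layer sizes invented for illustration (not GPT's actual dimensions):

```python
# A fully connected layer from n inputs to m outputs has n*m weights
# plus m biases. "Bigger model" just means more of these numbers.
def layer_params(n_in, n_out):
    return n_in * n_out + n_out

print(layer_params(512, 512))    # 262,656 parameters
print(layer_params(4096, 4096))  # 16,781,312 parameters -- same math, far more of it
```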
And if that is indeed the point, that the difference is simply size, then what does that law look like? Surely it would need to specify a size of neural network that is capable of producing derivative works.
But then that's just an arbitrary number, because we don't know what it would be.
I don't even think that matters much, right? Current LLMs already out-compete humans at many tasks. I think we're already past the threshold, at least in some regards. That is to say, I don't think there is a hard line because it depends on what your testing criteria are.
couldn’t agree more!