this post was submitted on 12 Oct 2024
220 points (95.5% liked)
Technology
These models are nothing more than glorified autocomplete algorithms, parroting responses to questions that already existed in their training data.
They're completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.
If they receive an input that doesn't correlate strongly with their training data, they just output whatever bullshit comes close, whether it's true or not. That's what makes them truly dangerous.
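The "glorified autocomplete" behavior described above can be illustrated with a deliberately tiny toy: a bigram model that only knows which word followed which in its training text, and on unseen input still emits *something*. This is a sketch of the parroting argument, not of how real LLMs work (they use neural next-token prediction, not bigram counts); all names here are made up for the example.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def autocomplete(model, prompt_word, length=5):
    """Greedily emit the most frequent continuation seen in training.
    On input with no match it still produces output -- a filler word --
    rather than saying "I don't know"."""
    out = [prompt_word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            out.append("unknown")  # confident-sounding filler, true or not
            continue
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ate the fish")
print(autocomplete(model, "the", 3))    # parrots the training text
print(autocomplete(model, "zebra", 2))  # no correlation -> filler output
```

The second call is the point: the model has never seen "zebra", but it answers anyway.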
And I highly doubt that'll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won't ever want their "state of the art AI chatbot" to answer a customer's question with "sorry, I don't know."
I can't wait for this stupid AI craze to eat its own tail.
Last I checked (which was a while ago), "AI" still can't pass the most basic of tasks, such as "show me a blank image" or "show me a pure white image". The model will output the most intense fever dream possible, but never a simple rectangle filled with #fff-coded pixels. I'm willing to debate the potential of AI again once they manage that without those "benchmarks" getting special attention in the training data.
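For contrast, the task that trips up generative models is trivial for deterministic code. A minimal stdlib-only sketch that builds a pure-white image as a plain-text PPM (every pixel 255 255 255, i.e. #fff); the function name is made up for this example:

```python
def white_ppm(width, height):
    """Return a plain-text PPM (P3) image where every pixel is pure white."""
    header = f"P3\n{width} {height}\n255\n"   # magic number, size, max value
    row = ("255 255 255 " * width).strip()    # one row of white RGB triples
    return header + "\n".join(row for _ in range(height)) + "\n"

# A 4x2 all-white image; save to a .ppm file to view it in any image tool.
print(white_ppm(4, 2))
```

No sampling, no training data, no fever dreams: the spec fully determines the output.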
Because it's not AI; it's a set of sophisticated pattern-separation, recognition, lossy-compression, and extrapolation systems.
Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.
True AI will only be possible when it can want something and decide something, with that decision based on entropy rather than extrapolation.
No. Intelligence does not necessitate goals. You are able to understand math, letters, words, and the meaning of those without pursuing a specific goal.
And our brains work in a similar way.
Our brains work in various ways. Somewhere in there a system similar to those "AI"s exists, I agree. But it's just one part. Artificial dicks are not the same thing as artificial humans.