Most humans don't reason. They just parrot shit too. The design is very human.
I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.
That's why CEOs love them. When your job is 90% spewing BS, a machine that does exactly that is impressive.
No shit
I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for the assertion.
Do we know that they don't reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.
If someone can objectively answer "no" to that, the bubble collapses.
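To make that question concrete, here's a minimal sketch (not from this thread; all names like `toy_model` and `probe` are hypothetical) of how perturbation benchmarks such as GSM-Symbolic try to separate memorization from genuine reasoning: hold the problem template fixed, randomize the numbers, and check whether accuracy survives.

```python
import random

def toy_model(prompt: str) -> str:
    # Stand-in for a model that memorized the canonical "3 + 4" instance
    # and repeats its answer no matter what numbers the prompt contains.
    return "She has 7 apples."

def make_problem(a: int, b: int) -> tuple[str, int]:
    prompt = f"Alice has {a} apples and buys {b} more. How many does she have?"
    return prompt, a + b

def probe(model, n_trials: int = 100) -> float:
    """Accuracy over randomly perturbed instances of one problem template.
    A pure memorizer collapses to near zero; a model actually doing the
    arithmetic should stay near 1.0."""
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(2, 99), random.randint(2, 99)
        prompt, expected = make_problem(a, b)
        if str(expected) in model(prompt):
            correct += 1
    return correct / n_trials

print(f"perturbed accuracy: {probe(toy_model):.2f}")  # near 0.00 for the memorizer
```

A real test would call an actual model instead of `toy_model`, but the point is the same: a memorizer's accuracy collapses as soon as the surface details change, while genuine reasoning shouldn't care what the numbers are.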
Does ANY model reason at all?
No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.
No shit. This isn't new.
Why would they "prove" something that's completely obvious?
The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.
They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.
That's called science
Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.