this post was submitted on 08 Jun 2025
816 points (95.3% liked)

Technology

LOOK MAA I AM ON FRONT PAGE

(page 2) 50 comments
[–] GaMEChld@lemmy.world 20 points 3 days ago (10 children)

Most humans don't reason. They just parrot shit too. The design is very human.

[–] skisnow@lemmy.ca 9 points 2 days ago (1 children)

I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.

[–] joel_feila@lemmy.world 7 points 2 days ago

That's why CEOs love them. When your job is 90% spewing BS, a machine that does the same is impressive.

[–] vala@lemmy.world 24 points 3 days ago
[–] communist@lemmy.frozeninferno.xyz 11 points 2 days ago* (last edited 2 days ago) (16 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the necessary next step for that assertion.

Do we know that they don't reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs to be answered. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

if someone can objectively answer "no" to that, the bubble collapses.

[–] reksas@sopuli.xyz 37 points 3 days ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 34 points 3 days ago (3 children)

No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.

[–] Auli@lemmy.ca 13 points 3 days ago

No shit. This isn't new.

[–] SplashJackson@lemmy.ca 24 points 3 days ago (1 children)
[–] technocrit@lemmy.dbzer0.com 22 points 3 days ago* (last edited 3 days ago) (7 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] yeahiknow3@lemmings.world 22 points 3 days ago* (last edited 3 days ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] tauonite@lemmy.world 15 points 3 days ago

That's called science

[–] ZILtoid1991@lemmy.world 11 points 3 days ago (3 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.
