this post was submitted on 08 Jun 2025
815 points (95.3% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

(page 2) 50 comments
[–] RampantParanoia2365@lemmy.world 17 points 1 day ago* (last edited 1 day ago) (1 children)

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

AI is not A I. I should make that a t-shirt.

[–] JDPoZ@lemmy.world 10 points 1 day ago (1 children)

It’s an expensive, carbon-spewing parrot.

[–] Threeme2189@lemmy.world 7 points 1 day ago

It's a very resource-intensive autocomplete.

[–] communist@lemmy.frozeninferno.xyz 11 points 1 day ago* (last edited 1 day ago) (16 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the necessary next step for that assertion.

Do we know that they don't and can't reason, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.

If someone can objectively answer "no" to that, the bubble collapses.

[–] Jhex@lemmy.world 49 points 2 days ago (1 children)

This is so Apple: claiming to invent or discover something "first" three years after the rest of the market.

[–] GaMEChld@lemmy.world 20 points 2 days ago (7 children)

Most humans don't reason. They just parrot shit too. The design is very human.

[–] elbarto777@lemmy.world 25 points 2 days ago (3 children)

LLMs deal with tokens. Essentially, predicting a series of bytes.

Humans do much, much, much, much, much, much, much more than that.
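For anyone who wants to see concretely what "predicting tokens" means, here is a minimal, self-contained Python sketch. The tiny vocabulary and the scoring function are made up purely for illustration; a real LLM computes those scores with a learned transformer over tens of thousands of subword tokens.

```python
# Toy illustration of "LLMs deal with tokens": text becomes integer IDs, and the
# model's only job is to output a probability for each possible next ID.
# The vocabulary and scores here are invented; a real model learns both.
import math

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
id_to_token = {i: t for t, i in vocab.items()}

def tokenize(text: str) -> list[int]:
    """Turn text into token IDs (real tokenizers use subwords, not whole words)."""
    return [vocab[w] for w in text.split()]

def next_token_probs(token_ids: list[int]) -> list[float]:
    """Stand-in for a model forward pass: fake scores that favour 'sat' after 'cat'."""
    scores = [1.0, 1.0, 5.0 if token_ids[-1] == vocab["cat"] else 1.0, 1.0, 1.0]
    total = sum(math.exp(s) for s in scores)
    return [math.exp(s) / total for s in scores]

ids = tokenize("the cat")
probs = next_token_probs(ids)
best = max(range(len(probs)), key=probs.__getitem__)
print(ids, "->", id_to_token[best])  # [0, 1] -> sat
```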

[–] skisnow@lemmy.ca 8 points 1 day ago (1 children)

I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think LLMs are in some way sentient or conscious, so it has lowered my tolerance for it as a topic, even if you did intend it flippantly.

[–] bjoern_tantau@swg-empire.de 36 points 2 days ago* (last edited 15 hours ago)
[–] Auli@lemmy.ca 13 points 2 days ago

No shit. This isn't new.

[–] brsrklf@jlai.lu 46 points 2 days ago (2 children)

You know, despite not really believing LLM "intelligence" works anything like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I wondered how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Tower of Hanoi puzzles (which they were trained on) while failing 4-move river crossings. Logically, those problems are very similar... Also, they fail to apply a step-by-step solution they were given.
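For context on why the Tower of Hanoi result stands out: the complete solution is a short deterministic recursion, so producing a long, correct move list only requires following it mechanically. A minimal sketch (illustrative, not code from the study):

```python
# The whole "step-by-step solution" to Tower of Hanoi is this one recursion,
# applied mechanically.
def hanoi(n: int, src: str, dst: str, aux: str, moves: list[tuple[str, str]]) -> None:
    """Append the moves that transfer n disks from src to dst using aux."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # clear the n-1 smaller disks out of the way
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)  # stack the smaller disks back on top

moves: list[tuple[str, str]] = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))   # 127 == 2**7 - 1
print(moves[:3])    # [('A', 'C'), ('A', 'B'), ('C', 'B')]
```

With 7 disks the same recursion already yields 2^7 − 1 = 127 moves, so a long correct Hanoi solution is a matter of following a fixed procedure rather than deep search.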

[–] sev@nullterra.org 49 points 2 days ago (38 children)

They're just fancy Markov chains with the ability to link bigger and bigger token sets. An LLM can only ever kick off processing in response to a prompt and can never initiate a line of reasoning on its own. That, along with the fact that its working set of data can never be updated moment to moment, means it's a physical impossibility for any LLM to achieve any real "reasoning" process.
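For readers unfamiliar with the comparison: a first-order Markov chain generates text by sampling each next word purely from counts of what followed the previous word. The sketch below exists only to illustrate that analogy; real LLMs condition on far longer contexts with learned weights rather than raw counts.

```python
# A toy first-order Markov chain over words, to illustrate the analogy only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which; these counts play the role of "weights".
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    counts = transitions.get(prev)
    if not counts:
        return "<eos>"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    if word == "<eos>":
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the fish"
```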
