This post was submitted on 04 Apr 2025
353 points (88.1% liked)

Technology

[–] BrianTheeBiscuiteer@lemmy.world 10 points 2 days ago* (last edited 2 days ago) (1 children)

The other day I asked an LLM to create a partial number chart to help my son learn which numbers are next to each other. When I gave it very detailed instructions up front, it failed miserably every time. And sometimes, even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.

I'd be very interested to learn its "thought process" in each of those scenarios.

[–] LarmyOfLone@lemm.ee 1 points 2 days ago

It's like that "Joey, repeat after me" meme from Friends haha

[–] pennomi@lemmy.world 7 points 2 days ago (1 children)

This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

[–] LarmyOfLone@lemm.ee 1 points 2 days ago

Better yet, teach the AI to write code that replaces specific optimized AI networks, then automatically profile, optimize, and unit test it!

[–] El_Azulito@lemmy.world 2 points 2 days ago

…Duh. 🤓

[–] Bell@lemmy.world 5 points 2 days ago

How can I take an article that uses the word "anywho" seriously?

[–] moonlight@fedia.io 4 points 2 days ago (3 children)

The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing "brain surgery" to short-circuit the learned arithmetic process and replace it.
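As a rough illustration of what that kind of "brain surgery" could look like mechanically, here is a hedged PyTorch sketch. Everything in it is hypothetical: real models don't expose a cleanly named `arithmetic` module, and locating such a circuit is exactly what the interpretability work is about. The forward-hook pattern itself is standard PyTorch, used here only to show the idea of overriding a learned computation with an exact one.

```python
import torch
import torch.nn as nn

# Stand-in "model" whose middle layer plays the role of the hypothetical
# learned-arithmetic circuit. Real models have no such cleanly named module;
# finding the circuit is the hard part that interpretability research targets.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(2, 8)
        self.arithmetic = nn.Linear(8, 1)  # pretend this layer "does the math"
        self.decoder = nn.Identity()

    def forward(self, x):
        return self.decoder(self.arithmetic(torch.relu(self.encoder(x))))

model = TinyModel()
operands = torch.tensor([[36.0, 59.0]])

def splice_calculator(module, inputs, output):
    # "Brain surgery": discard the learned, approximate result and return the
    # exact sum of the original operands instead.
    return operands.sum(dim=-1, keepdim=True)

# A forward hook that returns a value replaces the module's output.
handle = model.arithmetic.register_forward_hook(splice_calculator)
print(model(operands))  # tensor([[95.]])
handle.remove()
```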

[–] Not_mikey@slrpnk.net 3 points 2 days ago

I think a lot of services are doing this behind the scenes already. Otherwise ChatGPT would be getting basic arithmetic wrong a lot more often, considering the methods the article shows it's using.

[–] Nougat@fedia.io 3 points 2 days ago (1 children)

That math process for adding the two numbers - there's nothing wrong with it at all. Estimate the total and come up with a range. Determine exactly what the last digit is. In the example, there's only one number in the range with 5 as the last digit. That must be the answer. Hell, I might even use that same method in my own head.

The poetry example, people use that one often enough, too. Come up with a couple of words you would have fun rhyming, and build the lines around those words. Nothing wrong with that, either.

These two processes are closer to "thought" than I previously imagined.
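For what it's worth, that addition strategy is easy to write down. Here's a hedged Python sketch of the "rough estimate plus exact last digit" process described above; the function name and the ten-wide window are assumptions made for the toy, and the "fuzzy estimate" is just the true sum rounded to the nearest ten, standing in for whatever the model actually does internally.

```python
def heuristic_add(a: int, b: int) -> int:
    """Toy sketch of the 'rough estimate + exact last digit' addition strategy.

    Purely illustrative, not the model's actual mechanism: a coarse range plus
    an exactly known last digit pins down a single answer.
    """
    # Step 1: a coarse estimate of the total, giving a ten-wide candidate range.
    estimate = ((a + b + 5) // 10) * 10             # round half up to nearest ten
    candidates = range(estimate - 5, estimate + 5)  # ten consecutive values

    # Step 2: the last digit can be computed exactly from the operands alone.
    last_digit = (a % 10 + b % 10) % 10

    # Step 3: exactly one number in that range has the matching last digit.
    return next(n for n in candidates if n % 10 == last_digit)


print(heuristic_add(36, 59))  # 95
```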

[–] moonlight@fedia.io 9 points 2 days ago (1 children)

Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it were consistently accurate, it's not exactly efficient when a calculator from the 80s can do the same thing.

We have setups where LLMs can call external functions, but I think it would be cool and useful to be able to replace certain internal processes.

-

As a side note though, while I don't think it's a "true" thought process, I do think there's a lot of similarity between LLMs and the human subconscious. A lot of LLM behaviour reminds me of split-brain patients.

And as for the math aspect, it does seem like it does math very similarly to us. Studies show that we think of small numbers as discrete quantities, but big numbers in terms of relative size, which seems like exactly what this model is doing.

I just don't think it's a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.
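To make the "call an external function" setup mentioned above concrete, here's a minimal, library-free Python sketch of the pattern. All of the names are hypothetical and no real provider's API is shown; the point is just the shape: the model emits a structured tool request, a thin dispatcher runs the exact computation, and the result goes back for the model to phrase.

```python
import ast
import operator

def calculator(expression: str):
    """Safely evaluate a basic arithmetic expression (no names, no calls)."""
    allowed = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in allowed:
            return allowed[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        raise ValueError("unsupported expression")

    return ev(ast.parse(expression, mode="eval"))

# Hypothetical registry of tools the model is allowed to call.
TOOLS = {"calculator": calculator}

def handle_tool_call(tool_call: dict) -> str:
    """Dispatch a structured tool request (as an LLM would emit) to real code."""
    result = TOOLS[tool_call["name"]](**tool_call["arguments"])
    return str(result)

# In a real setup the model produces this structured request; here it's hard-coded.
fake_model_output = {"name": "calculator", "arguments": {"expression": "36 + 59"}}
print(handle_tool_call(fake_model_output))  # "95"
```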

[–] dual_sport_dork@lemmy.world 6 points 2 days ago (1 children)

when a calculator from the 80s can do the same thing.

1970s! The little blighters are even older than most people think.

Which is why I find it extra hilarious / extra infuriating that we've gone through all of these contortions and huge wastes of computing power and electricity to ultimately just make a computer worse at math.

Math is the one thing that computers are inherently good at. It's what they're for. Trying to use LLMs to perform it half-assedly is a completely braindead endeavor.

[–] Jakeroxs@sh.itjust.works 1 points 2 days ago

But who is going around asking these bots to specifically do math? Like, in normal usage I've never once done that, because I could just use a calculator, or spreadsheet software if I need to get fancy lol
