[–] jsomae@lemmy.ml 10 points 3 days ago (3 children)

You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.

Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them is so simple. Arguments that LLMs surely can't be very good at anything because they're just scaled-up autocomplete are not comforting to me at all.

[–] kassiopaea@lemmy.blahaj.zone 3 points 2 days ago (1 children)

This. I often see people shitting on AI as "fancy autocomplete" or joking about how they get basic things wrong, like in this post, but completely discounting how incredibly fucking capable they are in every domain that actually matters. That's what we should be worried about... what does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying in its implications alone.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 2 days ago (3 children)

Why doesn't the LLM know to write (and run) a program to calculate the number of characters?

I feel like I'm missing something fundamental.

[–] OsrsNeedsF2P@lemmy.ml 2 points 1 day ago (1 children)

You didn't get good answers, so I'll explain.

First, an LLM can easily write a program to count the number of 'r's. If you ask an LLM to do this, you will get the code back.
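For example, the kind of program it hands back is only a few lines. A minimal sketch (plain Python, nothing model-specific):

```python
# Count how many times the letter 'r' appears in a word.
word = "strawberry"
count = word.lower().count("r")
print(f"'{word}' contains {count} 'r's")  # prints: 'strawberry' contains 3 'r's
```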

But the ChatGPT.com website has no way of executing that code, even if the model generates it.

The second part of the explanation is how LLMs work. They operate at the word level (technically the token level, but think "word"). They don't see letters; the model literally only sees words. It generates output by starting to type words and guessing which word is most likely to come next. So it literally does not know how many 'r's are in strawberry. The impressive part is how good this "guessing what word comes next" is at answering more complex questions.
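You can see the token thing concretely with OpenAI's tiktoken library (a sketch, assuming you have tiktoken installed; the exact split varies by encoding):

```python
# The model sees integer token ids, not letters. Decoding each id
# individually shows the chunks the model actually works with.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models
ids = enc.encode("strawberry")
print(ids)                             # a short list of integers
print([enc.decode([i]) for i in ids])  # multi-letter chunks, typically no lone 'r'
```

None of those chunks is a single letter, which is why "how many r's" is such an unnatural question for the model.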

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 1 day ago (1 children)

But why can't "query the Python terminal" be trained into the LLM? It just needs some UI training.

[–] OsrsNeedsF2P@lemmy.ml 2 points 1 day ago

ChatGPT used to actually do this. But they removed that feature for whatever reason. Now the server that the LLM runs on doesn't provide the LLM a Python terminal, so the LLM can't query one.
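The mechanism itself is conceptually simple, though. A rough sketch of such a tool loop, where ask_llm is a hypothetical stand-in for a real model call and the executor is a toy with no sandboxing:

```python
# Sketch of a "give the model a Python terminal" loop: the server, not the
# model, runs any code the model emits, then feeds the output back in.
# ask_llm is a hypothetical placeholder, not a real API.
import io
import re
import contextlib

def run_code(code: str) -> str:
    """Execute Python code and capture what it prints (toy executor, unsandboxed)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def answer_with_tools(ask_llm, question: str) -> str:
    """If the model replies with a python code block, run it and hand back the output."""
    reply = ask_llm(question)
    match = re.search(r"`{3}python\n(.*?)`{3}", reply, re.DOTALL)
    if not match:
        return reply  # no code to run; return the answer as-is
    output = run_code(match.group(1))
    return ask_llm(f"{question}\n\nYour code printed: {output}\nNow answer in plain text.")
```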

[–] outhouseperilous@lemmy.dbzer0.com 1 points 2 days ago* (last edited 2 days ago) (1 children)

It doesn't know things.

It's a statistical model. It cannot synthesize information or problem-solve; it can only show you a rough average of its library of inputs, graphed by proximity to your input.

[–] jsomae@lemmy.ml 1 points 2 days ago (1 children)

Congrats, you've discovered reductionism. The human brain also doesn't know things, as it's composed of electrical synapses made of molecules that obey the laws of physics and direct one's mouth to make words in response to signals that come from the ears.

Not saying LLMs don't know things, but your argument as to why they don't know things has no merit.

[–] outhouseperilous@lemmy.dbzer0.com 2 points 2 days ago (1 children)

Oh, that's why everything else you said seemed a bit off.

[–] jsomae@lemmy.ml 1 points 2 days ago

sorry, I only have a regular brain, haven't updated to the metaphysical edition :/

[–] jsomae@lemmy.ml 1 points 2 days ago* (last edited 2 days ago) (1 children)

The LLM isn't aware of its own limitations in this regard. The specific problem of getting an LLM to know which characters a token comprises has not been a focus of training. It's a totally different kind of error from other hallucinations, almost entirely orthogonal to them. But other hallucinations are much more important to solve, whereas being able to count the number of letters in a word or add numbers together is not very important, since, as you point out, there are already programs that can do that.

At the moment, you can compare this perhaps to the Paris in the the Spring illusion. Why don't people know to double-check the number of 'the's in a sentence? They could just use their fingers to block out adjacent words and read each word in isolation. They must be idiots and we shouldn't trust humans in any domain.
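(And for the record, the mechanical check is trivial once you think to do it:)

```python
# Counting whole words sidesteps the illusion entirely.
sentence = "Paris in the the spring"
print(sentence.lower().split().count("the"))  # prints: 2
```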

[–] outhouseperilous@lemmy.dbzer0.com 2 points 2 days ago (1 children)

The most convincing arguments that LLMs are like humans aren't that LLMs are good, but that humans are just unrefrigerated meat and personhood is a delusion.

[–] jsomae@lemmy.ml 1 points 2 days ago (1 children)

This might well be true, yeah. But that's still good news for AI companies who want to replace humans -- the bar's lower than they thought.

[–] outhouseperilous@lemmy.dbzer0.com 1 points 2 days ago (1 children)

And why we should fight them tooth and nail, yes.

They're not just replacing us, they're making us suck more so it's an easy sell.

[–] jsomae@lemmy.ml 1 points 2 days ago

Well yeah. You're preaching to the choir lol.

[–] UnderpantsWeevil@lemmy.world 1 points 2 days ago (2 children)

one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.

I'd be more impressed if the room could tell me how many "r"s are in Strawberry inside five minutes.

If one day we discover that the human brain works on much simpler principles

Human biology, famous for being simple and straightforward.

Ah! But you can skip all that messy biology and stuff I don't understand that's probably not important, and just think of it as a classical computer running an x86 architecture, and checkmate, liberal, my argument owns you now!

[–] jsomae@lemmy.ml 1 points 2 days ago* (last edited 2 days ago) (1 children)

Because LLMs operate at the token level, I think a fairer comparison with humans would be to ask why humans can't produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/ despite the fact that it should be simple to -- they understand the sounds, after all. I'd be impressed if somebody could do this too! But the fact that most people can't shouldn't really move you to think humans must be fundamentally stupid because of this one curious artifact. Maybe they are fundamentally stupid for other reasons, but this one thing is quite unrelated.

[–] UnderpantsWeevil@lemmy.world 1 points 1 day ago (1 children)

why humans can't produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/ despite the fact that it should be simple to -- they understand the sounds, after all

That's just access to the right keyboard interface. Humans can and do produce those spellings with additional effort or advanced tool sets.

humans must be fundamentally stupid because of this one curious artifact.

That humans turn oatmeal into essays via a curious lump of muscle is an impressive enough trick on its face.

LLMs have 95% of the work of human intelligence handled for them and still stumble on the last bits.

[–] jsomae@lemmy.ml 1 points 1 day ago* (last edited 1 day ago) (1 children)

I mean, even people who are proficient with IPA struggle to read whole sentences written entirely in it. Similarly, people who speak and read Chinese struggle to read entire sentences written in pinyin. I'm not saying people can't do it, just that it's much less natural for us (even though it doesn't really seem like it ought to be).

I agree that LLMs are not as bright as they look, but my point here is that this particular thing -- their strange inconsistency in understanding which letters correspond to the tokens they produce -- specifically shouldn't be taken as evidence for or against LLMs being capable in any other context.

[–] UnderpantsWeevil@lemmy.world 1 points 1 day ago (1 children)

Similarly, people who speak and read Chinese struggle to read entire sentences written in pinyin.

Because pinyin was implemented by the Russians to teach Chinese to people who use Cyrillic characters. Would make as much sense to call out people who can't use Katakana.

[–] jsomae@lemmy.ml 1 points 1 day ago

More like calling out people who can't read romaji, I think. It's just not a natural encoding for most Japanese people, even if they can work it out if you give them time.

[–] outhouseperilous@lemmy.dbzer0.com 0 points 2 days ago (1 children)

It's not a fucking riddle, it's a koan/thought experiment.

It's questioning what 'communication' fundamentally is, and what knowledge fundamentally is.

It's not even the first thing to do this. Military theory was cracking away at the 'communication' thing a century before, and the nature of knowledge has discourse going back thousands of years.

[–] jsomae@lemmy.ml 1 points 2 days ago (1 children)

You're right, I shouldn't have called it a riddle. Still, being a fucking thought experiment doesn't preclude having a solution. Theseus' ship is another famous fucking thought experiment, which has also been solved.

[–] outhouseperilous@lemmy.dbzer0.com 0 points 2 days ago (1 children)

'A solution'

That's not even remotely the point. Yes, there are many valid solutions. The point isn't to solve it; it's what how you solve it says about your ideas, and how it clarifies them.

[–] jsomae@lemmy.ml 1 points 2 days ago (1 children)

I suppose if you're going to be postmodernist about it, but that's beyond my ability to understand. The only complete solution I know to Theseus' Ship is "the universe is agnostic as to which ship is the original. Identity of a composite thing is not part of the laws of physics." Not sure why you put scare quotes around it.

[–] outhouseperilous@lemmy.dbzer0.com 1 points 2 days ago (1 children)

For different value sets and use cases, dear.

[–] jsomae@lemmy.ml 1 points 2 days ago* (last edited 2 days ago)

as I said, postmodernist lol. I'm coming from the absolutist angle.

I'll admit though that it also functions to tell you about how someone thinks about the universe. But this is true of any question which has one right answer.