[–] Jtotheb@lemmy.world 2 points 1 week ago (1 children)

If you would like to link some abstracts you find in a DuckDuckGo search, that’s fine.

[–] CanadaPlus@lemmy.sdf.org 1 point 1 week ago (1 children)

I was actually going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.

[–] Jtotheb@lemmy.world 1 point 4 days ago (1 children)

That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

[–] CanadaPlus@lemmy.sdf.org 1 point 4 days ago* (last edited 1 day ago)

You can devise a task it couldn't have seen in the training data, I mean. Building a comprehensive argument out of such tasks requires a lot more work and time.

> You don’t even have access to the “thinking” side of the LLM.

Obviously, that goes for natural intelligences too, so it's not really a fair thing to require.
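
For concreteness, here's roughly what "devise a task it couldn't have seen" might look like in practice. This is a minimal sketch, assuming the `openai` Python client and an `OPENAI_API_KEY` in the environment; the invented rule, the model name, and the pass/fail scoring are all illustrative, not a rigorous benchmark.

```python
# Improvise a test: invent an arbitrary rule, generate random instances
# an LLM can't have memorized, and check whether it applies the rule.
import random
import string

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def make_task() -> tuple[str, str]:
    """Build a random instance of an invented rule: reverse the word,
    then uppercase every letter at an even (0-based) index."""
    word = "".join(random.choices(string.ascii_lowercase, k=8))
    reversed_word = word[::-1]
    answer = "".join(
        c.upper() if i % 2 == 0 else c for i, c in enumerate(reversed_word)
    )
    prompt = (
        f"Apply this rule to the word '{word}': reverse it, then uppercase "
        "every letter at an even index (0-based). Reply with only the result."
    )
    return prompt, answer


correct = 0
trials = 20
for _ in range(trials):
    prompt, answer = make_task()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    if reply.choices[0].message.content.strip() == answer:
        correct += 1

print(f"{correct}/{trials} novel instances solved")
```

Because the 8-letter words are random, the specific instances can't appear in any training corpus, which is the point of the exercise; whether passing it demonstrates "understanding" is, of course, exactly what's being argued above.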