this post was submitted on 28 Mar 2025
39 points (100.0% liked)
TechTakes
1751 readers
63 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
It's just overtrained on the puzzle such that it mostly ignores your prompt. Changing a few words out doesn't change that it recognises the puzzle. Try writing it out in ASCII or uploading an image with it written or some other weird way that it hasn't been specifically trained on and I bet it actually performs better.
My dude, what do you think ASCII is? Assuming we're using standard internet interfaces here and the request is coming in as UTF-8-encoded English text, it is already being written out in ASCII.
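(For the curious, a quick sketch of the point, assuming Python 3: UTF-8 is a strict superset of ASCII, so pure-ASCII English text produces byte-identical output under either encoding.)

```python
# UTF-8 is a strict superset of ASCII: any pure-ASCII string
# encodes to the exact same bytes either way.
prompt = "A farmer with a wolf, a goat, and a cabbage must cross a river."

utf8_bytes = prompt.encode("utf-8")
ascii_bytes = prompt.encode("ascii")

print(utf8_bytes == ascii_bytes)          # True: "writing it in ASCII" changes nothing
print(all(b < 128 for b in utf8_bytes))   # True: every byte is in the ASCII range
```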
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to produce a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to similar versions that add more steps to the task, like OCR or other forms of image parsing.
It also speaks to a difference in how AI pattern recognition compares to the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it's the changes and exceptions that jump out to a human. This feels almost like handing someone a picture of the Mona Lisa with aviators on; the model recognizes that it's 99% of the Mona Lisa and goes from there, rather than recognizing that the changes from that base case are significant, intentional variation rather than either a totally new thing or a 'corrupted' version of the original.
Exactly. It's overtrained on the test, so it ignores the differences. If you instead use a form it can read but that doesn't map to the same tokens/embeddings as the memorised test pattern, it will perform better. I'm not joking; it's a common tactic to get around censoring, and you're just routing around the issue the same way. What I'm saying is they've trained the model so hard on benchmarks that it is indeed dumber.
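(A toy sketch of why that trick works, using a made-up vocabulary and a greedy matcher — not any real model's tokenizer: the memorised phrase maps to one familiar token sequence, while a trivially re-encoded version of the same text falls apart into completely different tokens, so the memorised pattern never fires.)

```python
# Toy greedy longest-match tokenizer over a hypothetical vocabulary.
# Real BPE tokenizers are more involved, but the effect is the same.
VOCAB = ["river", "crossing", "puzzle", "wolf", "goat", "cabbage", " "]

def tokenize(text: str) -> list[str]:
    """Greedily match the longest known vocab entry, else emit one character."""
    tokens, i = [], 0
    while i < len(text):
        for word in sorted(VOCAB, key=len, reverse=True):
            if text.startswith(word, i):
                tokens.append(word)
                i += len(word)
                break
        else:
            tokens.append(text[i])  # unknown text falls back to single characters
            i += 1
    return tokens

print(tokenize("river crossing puzzle"))        # familiar whole-word tokens
print(tokenize("r i v e r  c r o s s i n g"))   # same text, unrecognisable tokens
```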
The machine I love can't be dumb, I love the machine and I can't love what is dumb.
another classic induncetive reasoning completed successfully!