When billion-dollar AIs break down over puzzles a child can do, it's time to rethink the hype
(www.theguardian.com)
"We did it, Patrick! We made a technological breakthrough!"
Why did anyone think that an LLM would be able to solve logic or math problems?
They’re literally autocomplete. Like, 100% autocomplete that is based on an enormous statistical model. They don’t think, they don’t reason, they don’t compute. They lay words out in the most likely order.
To be fair it’s pretty amazing they can do that from a user prompt - but it’s not doing whatever it is that our brains do. It’s not a brain. It’s not “intelligent”. LLMs are machine learning algorithms but they are not AI.
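To put "lay words out in the most likely order" in concrete terms, here's a minimal sketch of next-token prediction using a toy bigram counter. Everything here is illustrative; real LLMs use transformer networks over subword tokens, but the generate-one-token-at-a-time loop is the same shape:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then repeatedly sample the next word in proportion to those counts.
# Real LLMs replace the counting with a huge neural network, but the
# generation loop is the same idea.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Pick the next word weighted by how often it followed the
        # previous one: "the most likely order", with some noise.
        words, weights = zip(*candidates.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

There's no representation of what a cat or a mat *is* anywhere in there, which is the point: fluent output doesn't require understanding.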
It’s a fucking hornswoggle, always has been 🔫🧑🚀
Many people in the world don't know the difference between an expert system and an LLM. Or, to phrase it a different way, many people think that AI is equivalent to generative AI.
I think that's largely a result of marketing bullshit and terrible reporting. Of course it would be good if people could educate themselves, but to some degree we expect that the newspaper won't totally fail us, and then when it does, people just don't realize they got played.
On a personal note, I'm a teacher, and some of my colleagues are furious that our students are using grammar checkers because they think grammar checkers are AI, and they think grammar checkers were invented in the last 3 years. It's really wild because some of these colleagues are otherwise smart people who I'm certain have personal experience with Microsoft Word 20 years ago, but they've blocked it out of their mind, because somehow they're afraid that all AI is evil.
They got very good results by just making the model bigger and training it on more data. It started doing stuff that was not programmed into the thing at all, like writing songs and having conversations, the sort of thing nobody expected an autocomplete to do. The reasoning was that if they kept making it bigger and feeding it even more data, the line would keep going up. The fanboys believed it, investors believed it, and many business leaders believed it. Until they ran out of data and datacenters.
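For reference, the "line keeps going up" belief came from empirical scaling laws. A rough sketch of the commonly cited Chinchilla-style form (the constants are fitted from experiments, not fundamental) is:

L(N, D) ≈ E + A/N^α + B/D^β

where L is test loss, N is parameter count, D is training tokens, and E is an irreducible floor. The catch is that D can't keep growing once you've scraped everything available, which is the "ran out of data" part.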
it's such a weird stretch, honestly. songs and conversations are not different to predictive text, it's just more of it. expecting it to do logic after ingesting more text is like expecting a chicken to lay kinder eggs just because you feed it more.
It helped that this advanced autocorrect could get high scores on many exams at university level. That might also mean the exams don't test logic and reasoning as well as the teachers think they do.
Not necessarily do logic, but mimic it, like it can mimic coherent writing and basic conversation despite only being a statistical token muncher. The hope is that there's sufficient information in the syntax to model the semantics, in which case a sufficiently complex and well-trained model of the syntax is also an effective model of the semantics. This apparently holds up well for general language tasks, meaning "what we mean" is well-modeled by "how we say it." It's plausible, at face value, that rigorous argumentation is also a good candidate, which would give language models some way of mimicking logic by talking through a problem. It's just not very good in practice right now. Maybe a better language model could do better, maybe not for a reasonable cost.
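For what it's worth, "talking through a problem" is basically chain-of-thought prompting. Here's a minimal sketch using the OpenAI Python SDK, where the model name and the puzzle are illustrative placeholders, not a claim that this reliably produces logic:

```python
# Minimal chain-of-thought sketch: ask the model to talk through a
# problem step by step before answering, in the hope that modeling the
# syntax of argumentation also mimics its logic.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "A bat and a ball cost $1.10 together. The bat costs "
                "$1.00 more than the ball. How much does the ball cost? "
                "Think step by step, then give the final answer."
            ),
        }
    ],
)
print(response.choices[0].message.content)
```

Whether the step-by-step text reflects actual reasoning or just mimics its surface form is exactly the open question.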
Because that's how they're marketed and hyped. "The next version of ChatGPT will be smarter than a Nobel laureate" etc. This article is an indictment of the claims these companies make.
So fraud. It would be nice to get another FTX verdict at the very least. It could make those shit CEOs think twice before lying to people's faces if it means years in prison.
In this administration? heh.
My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.
I actually think this may explain some earlier reporting of some weird behavior of AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Koolaid, but because he fell prey to this neural heuristic that's in all of us.
Yeah, it has a name. The more you talk, the more people believe you are smart. It's partly based on the tendency to believe what we hear first and only check afterwards whether it's true.
I think you're right about that.
It didn't help that The Average Person has just shy of absolutely zero understanding of how computers work despite using them mostly all day every day.
Put the two together and it's a grifter's dream.
IMHO, if one's approach to the world is just to take it as it is and go with it, then probabilistic parrots creating the perceived elements of reality will work on that person, because that surface is what they use to decide what to do next. But if one has an analytical approach to the world, wanting to figure out what's behind the façade to understand it and predict what might happen, then one will spot that the "logic" behind the façades created by the probabilistic parrots is segmented into little pieces that don't match the other little pieces and don't add up to a greater edifice of logic. (Phrases do carry logic, since every phrase has an inherent logic in how it's put together, and that logic is fairly general. But the choice of which phrases get used follows a higher-level logic that is far more varied than the logic inherent in phrases, so LLMs lose consistency at that level: the training material goes in a lot more directions there than it does at the level of how phrases are put together.)
I don't think the mechanisms of evolution are necessarily involved.
We're just not used to interacting with this type of pseudo intelligence.
My point is that this kind of pseudo intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.
Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.
And LLMs are not the first sophisticated AI that's been around. We've had AI for decades, and really good AI for a while. But people don't anthropomorphize other kinds of AI nearly as much as LLMs. Sure, they ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive/sentient. But with LLMs we're seeing a larger portion of the population believing it than we've ever seen in human behavior before.
my point is, evolution doesn't need to be involved in this paradigm. it could just be something children learn - this thing talks and is therefore more interactive than this other thing that doesn't talk.
Additionally, at the time in pre-history when assessing the intelligence of something could determine your life or death and thereby ability to reproduce, language may not have been a great indicator of intelligence. For example, if you encounter a band of whatever hominid encroaching on your territory, there may not be a lot of talking. You would know they were intelligent because they might have clothing or tools, but it's likely nothing would be said before the spears started to be thrown.
If you're not yet familiar with Ed Zitron, I think you'd enjoy either his newsletter or his podcast (or both).