this post was submitted on 08 Jun 2025
282 points (94.1% liked)

Fuck AI

3116 readers
808 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
[–] Voroxpete@sh.itjust.works 21 points 1 week ago (1 children)

Testing shows that current models hallucinate more than previous ones. OpenAI rebadged GPT-5 as GPT-4.5 because the gains were so meagre that they couldn't get away with pretending it was a serious leap forward. "Reasoning" sucks: the model leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion. In many cases the steps and the conclusion don't even match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every "hope for the future" has fizzled utterly.

Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you're getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a "hyperscaling" technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.

The current state of AI is not cost effective. Microsoft (just to pick one example) is making somewhere in the region of a few tens of millions a year from Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way it could work is if not only the rate of progress were accelerating, but the rate of acceleration itself were accelerating. We're nowhere near that.

The crash is coming, not because LLMs cannot ever be improved, but because it's becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.