The argument for current LLM AIs leading to AGI has always been that they would spontaneously develop independent reasoning, through an unknown emergent property that would appear as they scale. It hasn't happened, and there's no sign that it will.
That's a dilemma for the big AI companies. They are burning through billions of dollars every month, and will need hundreds of billions more to keep scaling - but for what in return?
Current LLMs can still do a lot. They've provided Level 4 self-driving, and seem to be leading to general-purpose robots capable of much useful work. But the headwinds look ominous for the global economy - tit-for-tat protectionist trade wars, inflation, and a global oil shock due to war with Iran all loom on the horizon for 2025.
If current AI players are about to get wrecked, I doubt it's the end for AI development. Perhaps it will switch to the areas that can actually make money - like Level 4 vehicles and robotics.
Maybe if the author hadn't written "AI did hit a wall" in 2022, back when everyone was merely talking about diminishing returns, someone might have taken him a bit more seriously. However, AI is complex: there are new approaches to speed up training and inference, and different approaches to steer a model's output. The tech is still too new to say what's up next. It's so complex, even, that we might have months or years with no significant upgrade until a breakthrough. Other than that, it just reads as if the author wants his reputation back after making himself look like a negative Nancy. People forget that even the brain has hallucinations - but it also has layers in place to correct them.