Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'
"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027.
Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
- One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
- The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
- It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: "It would be a grave mistake to dismiss this as mere hype."
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...

....hmmmm....

O_O

The answer may surprise you!
I'm fascinated by the way they're hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP's twitter link. It's like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they're trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I've ever seen Scott on video, and he definitely gives off a weird vibe.)
Kokotajlo is a new name to me. What's his background? Prolific LW poster?
He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.
His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far
My own scoring:
I don't think any sane programmer or scientist would dignify the current "prompt engineering" "skill set" with a comparison to programming libraries, and AI agents still aren't what he was predicting for 2022.
There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.
Hahahaha, no... they are still losing money per customer, much less recouping training costs.
The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.
Emphasis on the word "contrive"
So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful if narrow use-case apps by 2022-2024, so we are already off target for this prediction.
I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the kool aid will buy it.
I went three layers deep in his references and his references' references to find out what the hell prompt programming is supposed to be, ended up in a gwern footnote:
It's the ideologized version of You're Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they're doing well?
gwern wrote:Bonus: a recent comment is skeptical:
Scott talks a bit about it in the video, but Daniel was recently in the news as the guy who refused to sign a non-disparagement agreement when he left OpenAI, which caused them to claw back his stock options.