BigMuffin69
Thanks for sharing this. Sent it to my mother who has spent a large part of her career working with children who require special care like this, she really enjoyed it. Her take:
"Thanks for this. It is what I have done for 12 years. I wonder why they didn't use the devices that are already out there that do these things and can be customized. It is very cool tho" <- (my mommy)
Also, man why do I click on these links and read the LWers comments. It's always insufferable people being like, "woe is us, to be cursed with the forbidden knowledge of AI doom, we are all such deep thinkers, the lay person simply could not understand the danger of ai" like bruv it aint that deep, i think i can summarize it as follows:
hits blunt "bruv, imagine if you were a porkrind, you wouldn't be able to tell why a person is eating a hotdog, ai will be like we are to a porkchop, and to get more hotdogs humans will find a way to turn the sun into a meat casing, this is the principle of intestinal convergence"
Literally saw another comment where one of them accused the other of being a "super intelligence denier" (i.e., heretic) for suggesting maybe we should wait till the robot swarms come over the hills before we declare it's game over.
:'( sad one. feel bad for the bebe, being raised by insane people.
"Im 99% sure I will die in the next year because of super duper intelligence, but in a world where that doesnt happen i plan to live 1000 years" surely is a forecast. Surprised they don't break their own necks on the whiplash from this take.
Not sure! What is CFAR?
:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.
Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
I heard new Gemini got the first question, so thats SOTA now*
*allegedly it came out the same day as the math olympiad so 'twas fair, but who the fuck knows
Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'
"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
- One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
- The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
- It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: "It would be a grave mistake to dismiss this as mere hype."
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."
Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...
....hmmmm....
O_O
The answer may surprise you!
I want this company to IPO so I can buy puts on these lads.
I did read one of Carlo's pop sci books back in the day, was a nice read. Iirc he's like one of the dudes all in on loop quantum gravity. Bet you'd know more about this than I do 😅