This is much more a TechTakes story than a NotAwfulTech one; let's keep the discussion over on the other thread:
Perhaps the most successful "sequel to chess" is actually the genre of chess problems, i.e., the puzzles about how Black can achieve mate in 3 (or whatever) from a contrived starting position that couldn't arise in ordinary ("real") gameplay.
There are also various ways of randomizing the starting positions in order to make the memorized knowledge of opening strategies irrelevant.
Oh, and Bughouse.
Pouring one out for the local-news reporters who have to figure out what the fuck "timeless decision theory" could possibly mean.
The big claim is that R1 was trained on far less computing power than OpenAI's models, at a fraction of the cost.
And people believe this ... why? I mean, shouldn't the default assumption about anything anyone in AI says be that it's a lie?
Altman: Mr. President, we must not allow a bullshit gap!
Musk: I have a plan... Mein Führer, I can walk!
I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical non sequitur, can scarcely be improved upon.
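For anyone out of the loop, the "familiar argument" in question is presumably the standard algebraic one, which goes something like this (a sketch, not necessarily the exact steps the chatbot regurgitated):

$$
\begin{aligned}
x &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots \\
9x &= 9 \\
x &= 1
\end{aligned}
$$

So any "proof" that starts there can only ever conclude $0.999\ldots = 1$, which makes the triumphant "hence the proof is complete" all the funnier.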
*Michael Keaton bursts out of a grave* It's sneer time!
the waste generation will expand to fill the available data centers
oops all data centers are full, we need to build more data centers