this post was submitted on 26 Aug 2024
17 points (100.0% liked)
TechTakes
Oh god is this the first time we have to sneer at a 404 article? Let's hope it will be the last.
It's running at frames per second, not seconds per frame, so it's not too energy-intensive compared with the generative video versions.
Ah yes, issues appear when shooting an enemy, in a shooter game. Definitely not proof that the technology falls apart when it's made to do the thing that it was created to do.
e: The demos made me motion sick. Blobs of colour appearing at random and floor textures shifting around aren't hallucinations?
yeah, this is weirdly sneerable for a 404 article, and I hope this isn't an early sign they're enshittifying. let's do what they should have and take a critical look at, ah, GameNGen, a name for their research they surely won't regret
wow! it’s a shame that creating this model involved plagiarizing every bit of recorded doom footage that’s ever existed, exploited an uncounted number of laborers from the global south for RLHF, and burned an amount of rainforest in energy that also won’t be counted. but fuck it, sometimes I shop at Walmart so I can’t throw stones and this sounds cool, so let’s grab the source and see how it works!
just kidding, this thing’s hosted on github but there’s no source. it’s just a static marketing page, a selection of videos, and a link to their paper on arXiv, which comes in at a positively ultralight 10 LaTeX-formatted letter-sized pages when you ignore the many unhelpful screenshots and graphs they included
so we can’t play with it, but it’s a model implementing a game engine, right? so the evaluation strategy given in the paper has to involve the innovative input mechanism they’ve discovered that enables the model to simulate a gameplay loop (and therefore a game engine), right? surely that’s what convinced a pool of observers with more-than-random-chance certainty that the model was accurately simulating doom?
of course not. nowhere in this paper is their supposed innovation in input actually evaluated — at no point is this work treated experimentally like a real-time game engine. also, and you pointed this out already — were the human raters drunk? (honestly, I couldn't blame them — I wouldn't give a shit either if my mturk was "which of these 1.6 second clips is doom") the fucking thing doesn't even simulate doom's main gameplay loop right; dead possessed marines just turn to a blurry mess, health and armor numbers only track reality in the loosest sense, it doesn't seem to think imps exist at all but does randomly place their fireballs where they should be, and sometimes the geometry it's simulating just casually turns into a visual paradox. chances are this experimental setup was tuned for the result they wanted — they managed to trick 40% of a group of people who absolutely don't give a fuck that the incredibly short video clip they were looking at was probably a video game. amazing!
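for the curious: a quick back-of-envelope check of how impressive "tricking 40%" actually is when pure guessing gets you 50%. this is a sketch, not the paper's analysis — the comment doesn't say how many rating trials there were, so `n = 100` is a made-up assumption, and `successes = 40` just mirrors the 40% figure.

```python
# Toy significance check: if raters pick "which clip is the real game" by
# flipping a coin (p = 0.5), how surprising is a score as low as ~40%?
# n = 100 trials is a HYPOTHETICAL count; the real number isn't given here.
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """Lower tail P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n = 100          # hypothetical number of rating trials (assumption)
successes = 40   # trials where the rater was NOT fooled, mirroring "40% tricked"
p_lower = binom_tail(n, successes)
print(f"P(<= {successes}/{n} correct under pure guessing) = {p_lower:.3f}")
```

point being: whether "40%" is meaningfully different from coin-flipping depends entirely on the number of trials, which is exactly the kind of detail you'd want an evaluation section to nail down.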
if we ever get our hands on the code for this thing, I'm gonna make a prediction: it barely listens to input, if at all. the video clips they've released on their site and YouTube are the most coherent this thing gets, and it falls apart the instant you do anything that wasn't in its training set (aka, the instant you use this real-time game engine to play a game and do something unremarkably weird, like try to ram yourself through a wall)
"I'unno, I'm fuckin' wasted and guessin' at random."
"So, your P(doom) is 50%."
Fuck, you beat me to the P(doom) joke. Well done.