this post was submitted on 06 Apr 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] scruiser@awful.systems 11 points 1 week ago (6 children)

I feel like some of the doomers are already setting things up to pivot for when their most recent major prophecy (AI 2027) fails:

From here:

(My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

It starts with some rationalist jargon to say the author agrees, but one year later...

AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control at the end of 2027, but the median narrative is probably around 2030 or 2031.

Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't have any mention of Trump blowing up the timeline by doing insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...

No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs will get better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers...

...but not by much.

Nope, they still have insane expectations.

Most of my disagreements are quibbles

Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions have come true. They'll exaggerate their "quibbles" into successful predictions of problems with the AI 2027 timeline, while overlooking the extent to which they agreed.

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason why Scott didn't include any call out of that in AI 2027.

[–] BigMuffin69@awful.systems 11 points 2 weeks ago (1 children)

:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.

Looks inside at materials, fkn Bostrom and Kelsey Piper and a whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.

[–] dgerard@awful.systems 9 points 2 weeks ago (7 children)

that's CFAR cult jargon, right?

[–] TinyTimmyTokyo@awful.systems 10 points 1 week ago (4 children)

Why do AI company logos look like buttholes?

(Blog post written by a crypto-turned-AI bro, but the observation is amusing.)

[–] gerikson@awful.systems 9 points 2 weeks ago (4 children)

LW: "being a younger brother makes you gay, the Catholic hierarchy is full of younger brothers, ergo 80% of the Vatican is gay"

https://www.lesswrong.com/posts/ybwqL9HiXE8XeauPK/how-gay-is-the-vatican

[–] maol@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

The modern father of this literature is Ray Blanchard

🚨🚨🚨 Do not take Ray Blanchard's work seriously!

[–] Soyweiser@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Considering how rightwingers have tried to link gayness to pedophilia, this is a subject I would avoid if I were them. E: and gwern just goes there.

[–] BlueMonday1984@awful.systems 9 points 1 week ago (1 children)

Fun fact: the rise of autoplag is now threatening the supply chain as well, as bad actors take advantage of LLM hallucinations to plant malware into people's programs.

[–] Architeuthis@awful.systems 9 points 1 week ago* (last edited 1 week ago) (6 children)

Here's a screenshot of a skeet of a screenshot of a tweet featuring an unusually shit take on WW2 by Moldbug:

link

transcript: skeet by Joe Stieb: Another tweet that should have ended with the first sentence.

Also, I guess I'm a "World War Two enjoyer"

tweet by Curtis Yarvin: There is very very extensive evidence of the Holocaust.

Unfortunately for WW2 enjoyers, the US and England did not go to war to stop the Holocaust. They went to war to stop the Axis plan for world conquest.

There is no evidence of the Axis plan for world conquest.

edit: hadn't seen yarvin's twitter feed before, that's one high octane shit show.

[–] sailor_sega_saturn@awful.systems 11 points 1 week ago* (last edited 1 week ago) (4 children)

Oh gosh, I looked up the post and that was a mistake. Actually, Mr. Yarvin there is quote-tweeting someone who dared, on twitter, to say that the Holocaust was real. The replies (including Yarvin's incorrect & off-topic nonsense) are about what you'd expect from twitter nowadays. So much Holocaust denial.

[–] gerikson@awful.systems 10 points 1 week ago (1 children)

ironically many "WW2 enjoyers" are big fans of German hardware, uniforms and tactics...

[–] bitofhope@awful.systems 10 points 1 week ago

Unusually shit take on WW2 but not an unusually shit take from Moldbug. Just the usual level of shit.

[–] swlabr@awful.systems 9 points 1 week ago (2 children)

I think it's accurate that the US and England didn't join to stop the Holocaust. Sentence 3 is a little oversimplified, and sentence 4 is straight-up lunacy.

[–] swlabr@awful.systems 10 points 1 week ago (3 children)

I checked the link, the thread elaborates a bit more:

to which my understanding is: this is absolutely true as well.

[–] froztbyte@awful.systems 9 points 2 weeks ago (10 children)

pedal to the metal on the content and information theft, folks:

a photo of a huge banner advert on a building at Bayfront Park. the ad reads "STOP HIRING HUMANS", with the tagline "The Era Of AI Employees Is Here". the advert is from a company named artisan

seems it's this lot. despite their name, there appears to be almost nothing artful or artistic about them - it's all b2b shit for Selling Better

[–] veganes_hack@feddit.org 9 points 2 weeks ago (3 children)

somebody had to do the design + layout for that banner. i wonder what was going through their head then.

[–] Architeuthis@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

The kokotajlo/scoot thing apparently made it to the new york times.

So this is what that was about:

stubsack post from two months ago: On slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication so he can pitch an op-ed by a notable ex-OpenAI researcher that will be ghost-written by him (meaning siskind) on the subject of how they (the ex researcher) opened a forecast market that predicts ASI by the end of Trump’s term, so be on the lookout for that when it materializes I guess.

edit: also @gerikson is apparently a superforecaster

[–] dgerard@awful.systems 9 points 2 weeks ago (7 children)

apparently a complete archive of scott siskind's old livejournal. found on the EA forum no less. https://archive.fo/fCFQx
