SneerClub

989 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago

"I recommend just betting to maximize EV."


the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)
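a rough sketch of what the recursive pass might look like, in the javascript I'd probably end up writing; the field names (`body`, `created_utc`, `replies`) follow the usual Reddit JSON convention and are an assumption — the merged BDFR/pushshift format may differ:

```javascript
// walk a comment tree to arbitrary depth, turning the unix epoch
// into an ISO 8601 string at every level (not just the first).
// a markdown renderer could be applied to `body` at the same spot.
function renderComments(comment, depth = 0) {
  const when = new Date(comment.created_utc * 1000).toISOString();
  const lines = [`${"  ".repeat(depth)}[${when}] ${comment.body}`];
  for (const reply of comment.replies ?? []) {
    lines.push(...renderComments(reply, depth + 1));
  }
  return lines;
}
```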

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)


Been waiting to come back to the steeple of the sneer for a while. It's good to be back. I just really need to sneer; this one's been building for a long time.

Now I want to gush to you guys about something that's been really bothering me for a good long while now. WHY DO RATIONALISTS LOVE WAGERS SO FUCKING MUCH!?

I mean holy shit, there's a wager for everything now. I read a wager that said we can just ignore moral anti-realism cos 'muh decision theory', that we must always hedge our bets on evidential decision theory, new Pascal's wagers, entirely new decision theories, the whole body of literature on moral uncertainty, Schwitzgebel's 1% skepticism and so. much. more.

I'm beginning to think it's the only type of argument they can make, because it allows them to believe obviously problematic things on the basis that they 'might' be true. I don't know how decision theory went from a useful heuristic in certain situations in economics to arguing that no matter how likely it is that utilitarianism is true you have to follow it cos math, acausal robot gods, fuckin infinite ethics, basically providing the most egregiously smug escape hatch to ignore entire swathes of philosophy, etc.

It genuinely pisses me off, because they can drown their opponents in mathematical formalisms: 50-page essays all amounting to impenetrable 'wagers' that they can always defend, no matter how stupid, because the thing 'might' be true. And they can go off and create another rule (something along the lines of 'the antecedent promulgation ex ante expected pareto ex post cornucopian malthusian utility principle') that they need for the argument to go through, do some calculus, declare it 'plausible' and call it a day. Like I said, all of this is so intentionally opaque that nobody outside their small clique can understand what the fuck they're going on about, and even then there's little to no disagreement within said clique!

Anyway, this one has been coming for a while, but I hope I've struck up some common ground with some other people here.


I don't particularly disagree with the piece, but it's striking how little effort is put in to make this resemble a news piece or a typical Vox explainer. It's just blatant editorializing ("Please do this thing I want") and very blatantly carrying water for the (somehow non-discredited) EA movement's priorities.


he takes a couple pages to explain why he knows that sightings of UFOs aren't aliens, because he can simply infer how superintelligent beings would operate + how advanced their technology would be. he then undercuts his own point by saying that he's very uncertain about both of those things, but wraps it up nicely with an excessively wordy speech about how making big bets on your beliefs is the responsible way to be a thought leader. bravo


@sneerclub

Greetings!

Roko called, just to say he's filed a trademark on Basilisk™ and will be coming after anyone who talks about it for licensing fees, which will go into his special Basilisk™ Immanentization Fund, and if we don't pay up we'll burn in AI hell forever once the Basilisk™ wakes up and gets around to punishing us.

Also, if you see your mom, be sure and tell her SATAN!!!!—

Universal Watchtowers (awful.systems)
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

by Monkeon, from the b3ta Mundane Video Games challenge


Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.


This totally true anecdote features a friend who "can't recall the names of his parents [but] remember[s] the one thing he'd be safer forgetting."


Discussion on AI starts at about 17mins. The Bas(ilisk) drop happens at 20:30. Sorry if ads mess up my time stamps. I think this is the second time it’s come up on the show.


Source Tweet

@ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?


And then remember how you had absolutely no idea how to do stats at that age, so you stayed confused for a while longer?


Apologies for the use of Japanese, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō


really: https://archive.ph/p0jPI

Roko’s twitter is an absolutely reliable guide to how recently a woman with dyed hair and facial piercings kicked him in the nuts again


It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning


I used to enjoy Ariely's books, and others like them, before I started reading better stuff. That whole behavioural economics genre seems to be a good example of content that holds up only as long as you don't read anything more on the subject.


Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.


Thought it worth sharing: among the very, very questionable material I've found in reading through this book's reference material, I came across this Blake Masters + Peter Thiel connection.

It's my obsession sneer because of how celebrated this god damn book is among the fight-for-the-user UX community.

I’ve mostly been reading the material but need to back up and do an author background check for each one.

https://web.archive.org/web/20200101054932/https://blakemasters.com/post/20582845717/peter-thiels-cs183-startup-class-2-notes-essay


There were five posts on r/sneerclub about our very good friends at Leverage Research and many interesting URLs linking off them.

and here's the collected LessWrong on Leverage
