TinyTimmyTokyo

joined 2 years ago
[–] TinyTimmyTokyo@awful.systems 27 points 1 month ago (3 children)

After minutes of meticulous research and quantitative analysis, I've come up with my own predictions about the future of AI.

[–] TinyTimmyTokyo@awful.systems 13 points 1 month ago (2 children)

"USG gets captured by AGI".

Promise?

[–] TinyTimmyTokyo@awful.systems 9 points 1 month ago (1 children)

Of course they use shitty AI slop as the background for their web page.

Like, what the hell is it even supposed to be? A mustachioed man writing in a journal in what appears to be a French village town square? Shadowy individuals chatting around an oddly incongruous fire pit? Guitar dude and listener sitting on invisible benches? I get that AI produces this kind of garbage all the time, but did the lesswrongers even bother to evaluate it for appropriateness?

[–] TinyTimmyTokyo@awful.systems 8 points 1 month ago (2 children)

This commenter may be saying something we already knew, but it's nice to have the confirmation that Anthropic is chock full of EAs:

(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)

[–] TinyTimmyTokyo@awful.systems 9 points 1 month ago (3 children)

Sorry, when she started taking Yud's claims to be a "renowned AI researcher" at face value, I noped out.

[–] TinyTimmyTokyo@awful.systems 10 points 1 month ago (5 children)

Hilarious. How much do you want to bet they vibe-coded the whole app?

[–] TinyTimmyTokyo@awful.systems 16 points 1 month ago

Amazing how many awful things are orange.

[–] TinyTimmyTokyo@awful.systems 13 points 1 month ago

I'm fine with the name. It's a good signifier that shit code has been written.

[–] TinyTimmyTokyo@awful.systems 8 points 2 months ago

LLMs producing garbage fiction? Oh Yud, he's getting close...

[–] TinyTimmyTokyo@awful.systems 13 points 2 months ago

One of the most important projects in the world. Somebody should fund it.

The Pioneer Fund (now the Human Diversity Foundation) has been funding this bullshit for years, Yud.

[–] TinyTimmyTokyo@awful.systems 7 points 3 months ago (1 children)

Lots of discussion on the orange site post about this today.

(I mentioned this in the other sneerclub thread on the topic but reposted it here since this seems to be the more active discussion zone for the topic.)

 

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in 1 sentence? Check.

This email by SBF is basically one big malapropism.

 

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

 

[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.

 

Excerpt:

Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

He's also a big eugenics supporter:

“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight .com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

(Reminds me a lot of the things Scott Siskind has written in the past.)

Some people who have been friendly with Hanania:

  • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
  • Hamish McKenzie, CEO of Substack
  • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
  • Tyler Cowen, libertarian econ blogger and George Mason University prof
  • J.D. Vance, US Senator from Ohio
  • Steve Sailer, race (pseudo)science promoter and all-around bigot
  • Amy Wax, racist law professor at UPenn
  • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
 

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
