gerikson

joined 2 years ago
[–] gerikson@awful.systems 13 points 1 day ago* (last edited 1 day ago) (11 children)

Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

(Dude then goes on to try to game-theorize this; I didn't bother to poke holes in it)

The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of a genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.
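The expected-harm arithmetic being gestured at can be sketched in a few lines. All the numbers here are illustrative assumptions for the sake of the argument, not real risk estimates:

```python
# Toy expected-harm comparison. Every number below is a made-up
# illustrative assumption, not an actual estimate of anything.
p_genocide = 0.99        # genocides demonstrably recur: probability near 1
deaths_genocide = 1e6    # rough order of magnitude of a single genocide

p_omnicide = 1e-6        # speculative extinction scenario: near zero
deaths_omnicide = 8e9    # everyone

ev_genocide = p_genocide * deaths_genocide
ev_omnicide = p_omnicide * deaths_omnicide

# A near-certain, recurring harm dominates the expected-value comparison
# unless the extinction probability is assumed to be vastly larger.
print(ev_genocide > ev_omnicide)
```

Under these assumptions the near-certain harm wins by orders of magnitude; the comparison only flips if you crank the extinction probability way up, which is of course exactly what the doomers do.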

At least one commenter gets it:

Most people distinguish between intentional acts and shit that happens.

(source)

Edit: never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

[–] gerikson@awful.systems 12 points 2 days ago

yeah but have you considered how much it's worth that gramma can vibecode a todo app in seconds now???

[–] gerikson@awful.systems 7 points 4 days ago* (last edited 3 days ago)

Haven't really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.

[–] gerikson@awful.systems 8 points 4 days ago (1 children)

I found out about that too when I visited Reddit and it was automatically translated to Swedish.

[–] gerikson@awful.systems 3 points 5 days ago* (last edited 4 days ago) (1 children)

This isn't an original thought, but a better framework for understanding the ideology (such as it is) of the current USG is not Nazi Germany but pre-war US right-wing obsessions - anti-FDR and anti-New Deal.

This appears in weird ways, like this throwaway comment regarding the Niihau incident, where two ethnic Japanese inhabitants of Niihau helped a downed Japanese airman immediately after Pearl Harbor.

Imagine, if you will, one of the 9/11 hijackers parachuting from the plane before it crashed, asking a random Muslim for help, then having that Muslim be willing to immediately get himself into shootouts, commit arson, kidnappings, and misc mayhem.

Then imagine that it was covered in a media environment where the executive branch had been advocating for war for over a decade, and voices which spoke against it were systematically silenced.

(src)

Dude also credits LessOnline with saving his life due to unidentified <<>> shooting up his 'hood when he was there. Charming.

Edit: nah, he's a neo-Nazi (or at least very concerned about the fate of German PoWs after WW2):

https://www.lesswrong.com/posts/6BBRtduhH3q4kpmAD/against-that-one-rationalist-mashal-about-japanese-fifth?commentId=YMRcfJvcPWbGwRfkJ

[–] gerikson@awful.systems 11 points 6 days ago (5 children)

LW:

Please consider minimizing direct use of AI chatbots (and other text-based AI) in the near-term future, if you can. The reason is very simple: your sanity may be at stake.

Perfect. No notes.

[–] gerikson@awful.systems 8 points 1 week ago (2 children)

Also into Haskell. Make of that what you will.

[–] gerikson@awful.systems 14 points 1 week ago

LOL the mod gets snippy here too

This comment too is not fit for this site. What is going on with y'all? Why is fertility such a weirdly mindkilling issue?

"Why are there so many Nazis in my Nazi bar????"

[–] gerikson@awful.systems 12 points 1 week ago* (last edited 1 week ago) (8 children)

LessWrong's descent into right-wing tradwife territory continues

https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=ueRbTvnB2DJ5fJcdH

Annapurna (member for 5 years, 946 karma):

Why is there so little discussion about the loss of status of stay at home parenting?

First comment is from user Shankar Sivarajan, member for 6 years, 1227 karma

https://www.lesswrong.com/posts/tdQuoXsbW6LnxYqHx/annapurna-s-shortform?commentId=opzGgbqGxHrr8gvxT

Well, you could make it so the only plausible path to career advancement for women beyond, say, receptionist, is the provision of sexual favors. I expect that will lower the status of women in high-level positions sufficiently to elevate stay-at-home motherhood.

[...]

EDIT: From the downvotes, I gather people want magical thinking instead of actual implementable solutions.

Granted, this got a strong disagree from the others and a tut-tut from Habryka, but it's still there as of now and not yeeted into the sun. And rats wonder why people don't want to date them.

[–] gerikson@awful.systems 14 points 1 week ago (5 children)

In recent days there's been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, and LWers getting all hot and bothered about it. I don't have a stinger in this fight, not least because investigations have shown that basically all honey exported to the EU from outside it is actually just flavored sugar syrup, but I found this complaint kinda funny:

The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts".

"Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!"

https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

[–] gerikson@awful.systems 9 points 1 week ago (4 children)

NYT covers the Zizians

Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html

Archive link: https://archive.is/9ZI2c

Choice quotes:

Big Yud is shocked and surprised that craziness is happening in this casino:

Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”

“A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

Good news everyone, it's popular to discuss the Basilisk and not at all a profoundly weird incident which first led people to discover the crazy among Rats:

Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.

Keep saving money for retirement and keep having kids, but for god's sake don't stop blogging about how AI is gonna kill us all in 5 years:

To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”

 

current difficulties

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 


The previous thread has fallen off the front page, feel free to use this for discussions on current problems

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into their training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibration, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter this.

 

Yes, I know it's a Verge link, but I found the explanation of the legal failings quite funny, and I think it's "important" we keep track of which obscenely rich people are mad at each other so we can choose which of their kingdoms to be serfs in.

 

Apologies for the link to The Register...

Dean Phillips is your classic ratfucking candidate, attempting to siphon off support from the incumbent to help their opponent. After a brief flare of hype before the (unofficial) NH primary, he seems to have flamed out by revealing his master plan too early.

Anyway, apparently some outfit called "Delphi" tried to create an AI version of him via a SuperPAC and got their OpenAI API access banned for their pains.

Quoth ElReg:

Not even the presence of Matt Krisiloff, a founding member of OpenAI, at the head of the PAC made a difference.

The pair have reportedly raised millions for We Deserve Better, driven in part by a $1 million donation from hedge fund billionaire Bill Ackman, who described his funding of the super PAC as "the largest investment I have ever made in someone running for office."

So the same asshole who is combating "woke" and DEI is bankrolling Phillips, supposed to be the new Bernie. Got it.

 

Years ago (we're talking decades) I ran into a small program that randomly generated raytraced images (think transparent orbs, lens flares, reflection etc), suitable for saving as wallpapers. It was a C/C++ program that ran on Linux. I've long since lost the name and the source code, and I wonder if there's anything like that out there now?
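I don't remember the name either, but the core of that kind of generator is small enough to sketch. Here's a toy version in Python rather than C/C++ - random opaque spheres only, no lens flares or reflections, all parameters invented for illustration:

```python
# Toy "random raytraced wallpaper" sketch: scatter a few spheres at random
# and raytrace them to an RGB pixel grid. Pure Python, no dependencies.
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def hit_sphere(center, radius, origin, direction):
    """Distance along the ray to the sphere, or None on a miss."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t > 0 else None

def render(width, height, seed=0):
    rng = random.Random(seed)
    # Random scene: 5 colored spheres floating in front of the camera.
    spheres = [((rng.uniform(-2, 2), rng.uniform(-1, 1), rng.uniform(3, 6)),
                rng.uniform(0.3, 0.8),
                (rng.random(), rng.random(), rng.random()))
               for _ in range(5)]
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Camera at the origin, looking down +z through a unit viewport.
            d = ((i / width - 0.5) * 2, (0.5 - j / height) * 2, 1.0)
            best, color = None, (0.1, 0.1, 0.2)  # dark background
            for center, radius, col in spheres:
                t = hit_sphere(center, radius, (0, 0, 0), d)
                if t is not None and (best is None or t < best):
                    best, color = t, col
            row.append(tuple(int(255 * c) for c in color))
        rows.append(row)
    return rows

img = render(64, 48)
```

The result can be dumped as a PPM or PNG for a wallpaper; the programs of that era layered in Phong shading, reflections, and flares, but the skeleton is the same.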

 

Rules: no spoilers.

The other rules are made up as we go along.

Share code by link to a forge, home page, pastebin (Eric Wastl has one here) or code section in a comment.
