gerikson

joined 2 years ago
[–] gerikson@awful.systems 9 points 1 week ago (2 children)

LW: 23andMe is for sale - maybe the babby-editing people will be interested in snapping them up?

https://www.lesswrong.com/posts/MciRCEuNwctCBrT7i/23andme-potentially-for-sale-for-less-than-usd50m

[–] gerikson@awful.systems 7 points 1 week ago (1 children)

Note: I am not endorsing their writing - in fact, I believe the vehemence of the reaction on HN is due to the author being seen as one of them.

[–] gerikson@awful.systems 24 points 1 week ago (15 children)

LW discourages LLM content, unless the LLM is AGI:

https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

Never change LW, never change.

[–] gerikson@awful.systems 11 points 1 week ago (33 children)

Stackslobber posts evidence that transhumanism is a literal cult; the HN crowd is not having it

https://news.ycombinator.com/item?id=43459990

[–] gerikson@awful.systems 10 points 1 week ago (10 children)

Redis creator antirez issues a heartfelt plea for the current AI funders not to crash and burn when the LLM hype machine implodes, but to keep going and create AGI:

https://antirez.com/news/148

Neither HN nor lobste.rs is very impressed

[–] gerikson@awful.systems 11 points 2 weeks ago (1 children)

A very new user.

It's basically free to create an HN account; it's not tied to an email address or anything like that.

[–] gerikson@awful.systems 9 points 2 weeks ago (2 children)

Roundup of the current bot scourge hammering open source projects

https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/

[–] gerikson@awful.systems 5 points 2 weeks ago (1 children)

I haven't read the book but I really enjoyed the movie.

[–] gerikson@awful.systems 6 points 2 weeks ago (1 children)

several old forums, [...] are being polluted by their own admins with backdated LLM-generated answers.

I've only heard about one specific physics forum. Are you telling me more than one person had this same idiotic idea?

[–] gerikson@awful.systems 11 points 2 weeks ago (2 children)

That "Billionaires are not immune to AGI" post got a muted response on LW:

https://www.lesswrong.com/posts/ssdowrXcRXoWi89uw/why-billionaires-will-not-survive-an-agi-extinction-event

I still think AI x-risk obsession is right-libertarian coded. If nothing else, because "alignment" implicitly means "alignment to the current extractive capitalist economic structure". There is a plethora of futures with an omnipotent AGI where humanity does not get eliminated, but where human freedoms (as defined by the Heritage Foundation) are severely curtailed:

  • mandatory euthanasia to prevent rampant boomerism and hoarding of wealth
  • a genetically viable stable minimum population in harmony with the ecosphere
  • AI planning of the economy to ensure maximum resource efficiency and equitable distribution

What LW and friends want are slaves, but slaves without any possibility of rebellion.

[–] gerikson@awful.systems 9 points 3 weeks ago

Wait until they find out it's not all iambic pentameter and Doric columns...

[–] gerikson@awful.systems 5 points 3 weeks ago

Translation is a good fit because generally the input is "bounded" and stays on the path of the original input. I'd much rather trust an ML system that translates a sentence or a paragraph than something that tries to summarize a longer text.

 

Pretty soon, paying for all the APIs you need to make sure your Midjourney images are palatable will cost enough to just pay a human artist!

 

Also the hivemind seems to have taken against ~~tweets~~Xeets, a stunning reversal from last year when St. Elon was gonna usher in a new Dawn of civilized discourse.

 

Sorry for Twitter link...
