sailor_sega_saturn

joined 2 years ago

Debating post-truth weirdos for large sums of money may seem like a good business idea at first, until you realize how insufferable the debate format is (and how no one normal would judge such a thing).

[–] sailor_sega_saturn@awful.systems 12 points 2 days ago (2 children)

Sadly all my best text encoding stories would make me identifiable to coworkers so I can't share them here. Because there's been some funny stuff over the years. Wait where did I go wrong that I have multiple text encoding stories?

That said I mostly just deal with normal stuff like UTF-8, UTF-16, Latin1, and ASCII.
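To illustrate the kind of thing that goes wrong with even the "normal" encodings, here's a minimal Python sketch (the string is just an example of mine, not one of the unshareable war stories):

```python
# The same four characters, three different byte-level representations.
s = "café"

utf8 = s.encode("utf-8")       # b'caf\xc3\xa9' -- 5 bytes, é takes 2
utf16 = s.encode("utf-16-le")  # 8 bytes, every char takes 2 here
latin1 = s.encode("latin-1")   # b'caf\xe9' -- 4 bytes, é fits in 1

# Decoding with the wrong codec doesn't error, it silently produces
# mojibake -- which is how most encoding stories start:
print(utf8.decode("latin-1"))  # 'cafÃ©'
```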

[–] sailor_sega_saturn@awful.systems 18 points 2 days ago* (last edited 2 days ago) (4 children)

~~Senior software engineer~~ programmer here. I have had to tell coworkers "don't trust anything ChatGPT tells you about text encoding" after it made something up about text encoding.

Remember when you could read through all the search results on Google rather than being limited to the first hundred or so results like today? And boolean search operators actually worked and weren't hidden away behind a "beware of leopard" sign? Pepperidge Farm remembers.

[–] sailor_sega_saturn@awful.systems 12 points 4 days ago* (last edited 4 days ago) (1 children)

But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

So what harms has Mr. Yudkowsky enumerated? Off the top of my head I can remember:

  1. Diamondoid bacteria
  2. What if there's like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet and your only job is to ignore it and play your Nintendo but it's so clever and sexy
  3. What if we're already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
  4. What if we're already in purgatory? If so we might be able to talk to future robot gods using time travel; well not real time travel, more like make believe time travel. Wouldn't that be spooky?

Ah yes, the journal *Intelligence*:

First, Kanazawa’s (2008) computations of geographic distance used Pythagoras’ theorem and so the paper assumed that the earth is flat (Gelade, 2008). Second, these computations imply that ancestors of indigenous populations of, say, South America traveled direct routes across the Atlantic rather than via Eurasia and the Bering Strait.

[–] sailor_sega_saturn@awful.systems 16 points 5 days ago (1 children)

In their defense you have to make money to spend money ^on^ ^castles^

Mirror bacteria? Boring! I want an evil twin from the negaverse who looks exactly like me except right hande-- oh heck. What if I'm the mirror twin?

[–] sailor_sega_saturn@awful.systems 10 points 1 week ago* (last edited 1 week ago) (3 children)

what the heck is an eigenrobot??

Update: It is too late, Sneerclub, I have seen everything.

[–] sailor_sega_saturn@awful.systems 6 points 1 week ago* (last edited 1 week ago)

I mean, unrestricted skepticism is the appropriate response to any press release, especially coming out of Silicon Valley megacorps these days.

Indeed, I've been involved in crafting a Silicon Valley megacorp press release before. I've seen how the sausage is made! (Mine was more or less factual or I wouldn't have put my name on it, but dear heavens, a lot of wordsmithing goes into any official communication at megacorps.)

[–] sailor_sega_saturn@awful.systems 12 points 1 week ago* (last edited 1 week ago) (2 children)

Maybe I'm being overzealous (I can do that sometimes).

But I don't understand why this particular experiment suggests the multiverse. The logic appears to be something like:

  1. This algorithm would take a gazillion years on a classical computer
  2. So maybe other worlds are helping with the compute cost!

But I don't understand this argument at all. The universe is quantum, not classical. So why do other worlds need to help with the compute? Why does this experiment suggest it in particular? Why does it make sense for computational costs to be amortized across different worlds if those worlds will then have to go on to do other different quantum calculations than ours? It feels like there's no "savings" anyway. Would a smaller quantum problem feasible to solve classically not imply a multiverse? If so, what exactly is the threshold?

[–] sailor_sega_saturn@awful.systems 19 points 1 week ago* (last edited 1 week ago) (16 children)

Can we all take a moment to appreciate this absolutely wild take from Google's latest quantum press release (bolding mine) https://blog.google/technology/research/google-willow-quantum-chip/

Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25^ or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

The more I think about it the stupider it gets. I'd love it if someone with an actual physics background were to comment on it. But my layman take is that it reads as nonsense, to the point of being irresponsible scientific misinformation, whether or not you believe in the many-worlds interpretation.
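For scale (my arithmetic, not anything from the press release beyond the 10^25^ figure): comparing the claimed classical runtime against the ~1.38 × 10^10^-year age of the universe takes one line of Python:

```python
# Google's claimed classical runtime for the benchmark, in years,
# versus the approximate age of the universe.
claimed_classical_years = 10 ** 25
age_of_universe_years = 1.38e10

ratio = claimed_classical_years / age_of_universe_years
print(f"{ratio:.2e}")  # ~7.25e+14 universe-ages
```

So the number "vastly exceeds the age of the universe" by about fifteen orders of magnitude, which is exactly why it tells you nothing about multiverses, only about how slow the classical simulation is.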

 

https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren't interested in anything besides "superintelligence" which strikes me as an optimistic business strategy. If you are "cracked" you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

 

Follow up to https://awful.systems/post/1109610 (which I need to go read now because I completely overlooked this)

Now OpenAI has responded to Elon Musk's lawsuit with an email dump containing a bunch of weird nerd startup funding drama: https://openai.com/blog/openai-elon-musk

Choice quote from OpenAI:

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

OpenAI have learned how to redact text properly now though, a pity really.

 

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today it's after midnight oh gosh, but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

 

Don't mind me I'm just here to silently scream into the void

Edit: I'm no good at linking to HN apparently, made link more stable.
