this post was submitted on 16 May 2025
485 points (94.0% liked)

[–] LovableSidekick@lemmy.world 11 points 4 days ago* (last edited 4 days ago) (4 children)

The AI haters will hate this, but I think AI is gonna provide the push that forces the fundamental changes we want. You can only replace so many people with AI and robots. The theoretical point of zero employees also means zero customers, because nobody has any money to buy anything, so making employees obsolete makes business and profits obsolete. In the real world the system will change long before that point, because it will have to. It might come from food riots and social breakdown, or from political movements finally taking hold; I don't know, but AI will make the profit system eat itself. I'm just not looking forward to the extremely difficult transition period.

[–] drewcarreyfan@lemm.ee 6 points 3 days ago* (last edited 3 days ago) (1 children)

I want to believe you're right, but in a world where AI can fully replace human labor, that will likely also apply to the areas of mass surveillance and military suppression.

Imo, one of the scariest and most frustrating developments in robotics in the past 50 years is the ability to process billions of text and voice conversations at once, 24/7. Things take on a different tone when the US Government suddenly finds it feasible to listen to all of us, all the time.

[–] LovableSidekick@lemmy.world 3 points 3 days ago

Yes, we're going to have these surveillance capabilities. Anti-AI memes and boycotts won't stop it. The rational choice is to develop authority structures the public can trust. Instead of treating the whole concept of authority as the enemy by default, we have to figure out a way to make it trustworthy. The question is how, and I don't have that answer, but I know that's the question. I see it as kind of analogous to how providing basic income, healthcare, etc. for everybody would cut down on crimes of survival. When people aren't desperate they don't do desperate things. If making laws didn't attract money and prestige, greedy people wouldn't be drawn to it, but public-spirited people still would.

[–] explodicle@sh.itjust.works 8 points 4 days ago

I want to believe you're right. But everything else so far has just been a gradually applied multiplier on human labor, not a full replacement. Instead of a sudden tipping point, we'd watch each other become destitute one by one, perpetually looking out for only ourselves.

[–] jsomae@lemmy.ml 1 points 3 days ago (1 children)

I think you're talking about accelerationism. IMO, the main problem with unrestrained AI growth is that if AI turns out to be as good as the hype says it is, then we'll all be dead before revolution occurs.

[–] LovableSidekick@lemmy.world 1 points 3 days ago (1 children)

The trick is to judge things on their own merit and not on the hype around them.

[–] jsomae@lemmy.ml 1 points 3 days ago (1 children)

In that case, you should know that Geoff Hinton (the guy whose lab kicked off the whole AI revolution last decade) quit Google in order to warn about the existential risk of AI. He believes there's at least a 10% chance that it will kill us all within 30 years. Ilya Sutskever, his former student and co-founder of OpenAI, believes similarly, which is why he quit OpenAI and founded Safe Superintelligence (yes, that basic HTML document really is their homepage) to help solve the alignment problem.

You can also find popular rationalist AI pundits like gwern, acx, yudkowsky, etc. voicing similar concerns, with a range of P(doom) from low to laughably high.

[–] LovableSidekick@lemmy.world 1 points 2 days ago* (last edited 2 days ago) (1 children)

Yes, I know: the robot apocalypse that people seem desperate to be afraid of is always just around the corner. Geoff Hinton, while a definite pioneer in AI, didn't kick anything off; he was one of a large number of people working on it, and one of a small number predicting armageddon.

[–] jsomae@lemmy.ml 1 points 2 days ago (1 children)

The reason it's always just around the corner is that there is very strong evidence we're approaching the singularity. Why do you sound sarcastic saying this? What probability would you assign to an AI apocalypse in the next three decades?

Geoff Hinton absolutely kicked things off. Everybody else had given up on neural nets for image recognition, but his breakthrough renewed interest throughout the world. We wouldn't have deepdreaming slugdogs without him.

It should not be surprising that most people in the field of AI are not predicting armageddon, since it would be harmful to their careers to do so. Hinton is also not predicting the apocalypse -- he's saying 10-20% chance, which is actually a prediction that it won't happen.

[–] LovableSidekick@lemmy.world 1 points 1 day ago (1 children)

I'm sarcastic because I would assign it about the same probability as a zombie apocalypse. At the nuts-and-bolts level I think they're both technically flawed Hollywood fantasies.

What does an AI apocalypse even look like to you? Computers launching nuclear missiles or what? Shutting down power grids?

[–] jsomae@lemmy.ml 1 points 1 day ago* (last edited 1 day ago) (1 children)

Please assign probabilities to the following (for the next 3 decades):

  1. the probability that an AI smarter than any human at any intellectual task a human can do comes to exist (superintelligence);
  2. given (1), the probability that it decides to kill all humans to achieve its goals (misaligned);
  3. given (2), the probability that it succeeds at killing all humans.

bonus: given (1) and (2), the probability that we don't even notice it wants to kill us, e.g. because we don't know how to understand what it's thinking.

Since the AI is smarter than me, I only need to propose one plausible method by which it could exterminate all humans. It can come up with a method at least as good as mine, most likely something much better, though. The typical answer here would be that it bio-engineers a lethal virus which is initially harmless (to avoid detection), but responds to some trigger like the introduction of a certain chemical or maybe a strong radio signal. If it's very smart, and has a very good understanding of bioengineering, it should be able to produce a virus like this by paying a laboratory to e.g. perform some CRISPR operations on some existing bacterial strain (or even just mix some chemicals together if Sagan turns out to be right about bioengineering) and mail a sample somewhere. It can wait until everyone is infected before triggering the strain.

[–] LovableSidekick@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Or how about you don't assign me tasks and I don't do them? Cuz I don't remember signing up for a class.

[–] jsomae@lemmy.ml 1 points 22 hours ago (1 children)

Well, the probability you have for the AI apocalypse should ultimately be the product of those three numbers. I'm curious which of those is the one you think is so unlikely.
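
To make that concrete, here's a quick sketch (Python, with numbers made up purely for illustration, not anyone's actual estimates) of how the three estimates multiply out:

```python
# Multiply the three conditional estimates into an overall probability of an
# AI apocalypse within the next three decades. All numbers are illustrative.
p_superintelligence = 0.5   # (1) superintelligence comes to exist
p_misaligned = 0.2          # (2) given (1), it decides to kill all humans
p_successful = 0.1          # (3) given (2), it succeeds

p_apocalypse = p_superintelligence * p_misaligned * p_successful
print(f"P(apocalypse) = {p_apocalypse:.3f}")  # 0.010 with these toy numbers
```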

[–] LovableSidekick@lemmy.world 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

Okay here are my estimates:

1: 100% but I don't have a timeline. It's not going as fast as the cultural hype presents it. We don't even really understand human thinking yet, let alone how to make a computer do it. But I'm sure we'll get there eventually.

2: Also 100%. AI doesn't need to decide on its own to kill all humans; it could be assigned that goal by some maniac. The barrier to possessing sophisticated AI software is not nearly as high as the barrier to getting destructive nuclear weapons, biohazards, etc. Sooner or later I'm sure somebody who doesn't think humanity should exist will try to unleash a malevolent AI.

3: At or near zero, and I only include "or near" because mistakes happen. Automated systems that could potentially destroy the human race should always include physical links to people - for example, the way actually launching a nuclear missile requires physical actions by human beings. But of course there's always the incompetence factor - which could annihilate the human race without the help of AI.

You need to not only propose a "plausible" scenario, but also present a reason to believe it will happen. It's plausible that a rogue faction could infiltrate the military, gain access to launch codes and deliberately start WWIII. It's plausible that a bio lab could create an organism that overcomes the human immune system and resists all medications. A nonzero chance of any of those happening isn't proof that they're inevitable, with or without AI.

[–] jsomae@lemmy.ml 1 points 2 hours ago

Well, I'm not claiming that an AI apocalypse is inevitable, just that it's possible enough that we should start worrying about it now. As for the reason to believe it would happen -- isn't that covered by (2)? If you believe that (2) will occur with near-100% certainty, then that would be the impetus.

[–] gandalf_der_12te@discuss.tchncs.de 1 points 3 days ago* (last edited 3 days ago) (1 children)

I agree with you that AI will probably replace a lot of white-collar jobs by 2035, which is not that far away, and that it will necessitate political change.

I also think that UBI (Universal Basic Income) is probably the most natural way forward. It pays a constant amount to each person per month, funded by money collected through a wealth tax. It does not have to be implemented all at once, but can be introduced gradually, e.g. provide only $200/(person*month) in the beginning, and then continuously scale up as needed.


The wealth tax is needed at the same time because the money has to come from somewhere. Printing new money is not a great option because it leads to steep inflation.
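
Just to give a sense of scale, here's a rough back-of-the-envelope sketch; the covered population is an assumption (roughly a US-sized country, chosen only for illustration), while the $200 per person per month is the starting level mentioned above:

```python
# Back-of-the-envelope cost of the starting UBI level proposed above.
# The covered population is an assumed figure (roughly US-sized), used only
# for scale; the $200 per person per month is the starting level from above.
population = 330_000_000
monthly_payment = 200  # dollars per person per month

annual_cost = population * monthly_payment * 12
print(f"Annual cost: ${annual_cost:,}")  # 792,000,000,000 -> about $0.8 trillion/year
```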

[–] LovableSidekick@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

Exactly, and as automation gradually makes profits obsolete, the wealth tax and UBI should evolve from money into a basic right to receive goods produced by the automation. Money is really just a middleman. If we eliminate scarcity we won't need it.