this post was submitted on 10 Dec 2023
156 points (85.1% liked)

Technology

top 50 comments
[–] AstralPath@lemmy.ca 83 points 2 years ago (2 children)

He says as he conveniently ignores the existence of Boston Dynamics.

[–] modifier@lemmy.ca 38 points 2 years ago (1 children)

We're 15 years max from the inevitable "OpenAI + Boston Dynamics: Better Together" ad after they merge.

[–] Knusper@feddit.de 15 points 2 years ago (1 children)

I mean, at this rate, I'm imagining Microsoft will have hollowed out OpenAI in a few years, but I could see them buying Boston Dynamics too, yes.

[–] tutus@links.hackliberty.org 9 points 2 years ago (1 children)
[–] systemglitch@lemmy.world 2 points 2 years ago

It was a great Black Mirror episode.

[–] teft@startrek.website 63 points 2 years ago (3 children)

What happens in the scenario where a superintelligence just uses social engineering and humans are its arms and legs?

[–] CaptnNMorgan@reddthat.com 16 points 2 years ago (1 children)

I loved Eagle Eye when it came out; I was 10(?). I never see it get mentioned, though. Maybe it doesn't hold up, I don't really know, but the concept is great and shows exactly how that could happen.

[–] pelespirit@sh.itjust.works 5 points 2 years ago (1 children)

Honest question, is this Eagle Eye? https://sh.itjust.works/post/10786110

They're calling it EagleAI

[–] ReveredOxygen@sh.itjust.works 3 points 2 years ago

Eagle Eye is a movie

[–] jeena@jemmy.jeena.net 3 points 2 years ago
[–] ikidd@lemmy.world 29 points 2 years ago (2 children)

Well, there's a complete lack of imagination for you.

[–] rosymind@leminal.space 13 points 2 years ago (3 children)

Seriously!

Oh, you've tasked AI with managing banking? K, all bank funds are suddenly corrupted. Oh, you've tasked AI with managing lights at traffic intersections? K, they're all green now. Oh, you've tasked AI with filtering 911 calls to dispatchers? K, all real emergencies are on hold until disconnected.

I could go on and on and on...

[–] intensely_human@lemm.ee 10 points 2 years ago (1 children)

You tasked AI with doing therapy for people? Congrats, now humanity as a whole is getting more miserable.

[–] rosymind@leminal.space 6 points 2 years ago

I think this one's my favorite so far!

AI doc: "Please enter your problem" Patient: "Well, I feel depressed because I saw on Facebook that my x-girlfriend has a new guy" AI doc: "Interesting. I advise you to spend more time on social media. Have you checked her insta yet?"

[–] FishFace@lemmy.world 9 points 2 years ago (1 children)

I've got a great idea. Let's not do those things.

[–] rosymind@leminal.space 6 points 2 years ago (1 children)
[–] FishFace@lemmy.world 6 points 2 years ago (1 children)

My first act will be to grant a boon to anyone who sucked up to me before I was president. You're doing well!

[–] ikidd@lemmy.world 9 points 2 years ago (2 children)

Oh, AI is running your water treatment plant? Or a chemical plant on the outskirts of the city? Or the nuclear plant?

Good luck with that.

[–] derpgon@programming.dev 9 points 2 years ago

Whatever a virus is able to do, AI could, theoretically, do as well: ransomware, keylogging, social engineering (I'd argue this one is the most likely; just look at people trusting whatever AI spits out with absolute confidence).

When you mentioned nuclear power plants, Stuxnet comes to mind for me.

[–] afraid_of_zombies@lemmy.world 2 points 2 years ago

I don't even think it would have to go that far. Just corrupt my laptop and the laptops of the 20 or so guys like me across the industrial world. Plant a logic bomb set to go off in, say, 10 years. All those plants get updated and maintained, and 10 years from now every PLC fires itself. I could write the code in under an hour to make a PLC physically destroy itself on a certain date.

[–] mutch@discuss.tchncs.de 1 points 2 years ago (2 children)
[–] afraid_of_zombies@lemmy.world 10 points 2 years ago (1 children)

Social media corruption, blackmail and extortion, attacks on financial exchanges, compromising control systems for infrastructure, altering police records, messing with your taxes, changing prescriptions, declaring you legally dead, draining your bank account.

Given full control over computers, some being could easily dump child porn onto your personal devices and get a SWAT team sent to your door. And that's just you; I'm sure you have family and friends. So yeah, you will do what that being says, which includes giving it more power.

[–] Kolanaki@yiffit.net 4 points 2 years ago* (last edited 2 years ago) (1 children)

AI generated biological weapons.

[–] mutch@discuss.tchncs.de 1 points 2 years ago (1 children)
[–] reverendsteveii@lemm.ee 2 points 2 years ago (1 children)

Listen up, soldier: this is your President. Here are all of the authorization codes, so you know this is real. Launch all of our WMDs at this list of targets that will simultaneously do maximum damage to the targets while still giving them the ability to counterstrike, thus maximizing the overall body count.

--The AI

People will be its arms and legs.

[–] mutch@discuss.tchncs.de 1 points 2 years ago

So, stuff that's already possible.

[–] pastermil@sh.itjust.works 29 points 2 years ago (1 children)

Who needs arms & legs if you can make the humans kill each other?

[–] stewsters@lemmy.world 5 points 2 years ago (1 children)

Honestly you probably don't even need to exist to do that.

Humans have been trying hard to do that on their own.

[–] pastermil@sh.itjust.works 3 points 2 years ago

What are you talking about? We all live in peace and harmony here.

Gosh, I want to kill my neighbor.

[–] just_another_person@lemmy.world 27 points 2 years ago (2 children)

I think a sufficient "Doom Scenario" would be an AI that is widespread and capable enough to poison the well of knowledge we ask it to regurgitate back at us out of laziness.

[–] rsuri@lemmy.world 18 points 2 years ago

That's pretty much social media today.

[–] the_q@lemmy.world 4 points 2 years ago

You best start believing in societal collapse stories, cause you're in one.

[–] KISSmyOS@lemmy.world 21 points 2 years ago* (last edited 2 years ago) (1 children)

There will be more than enough humans willing to help AI kill the others first, before realizing that "kill all humans" actually meant "kill all humans".

[–] Gregorech@lemmy.world 2 points 2 years ago (1 children)

Still need humans for that sweet, sweet maintenance.

[–] systemglitch@lemmy.world 2 points 2 years ago

Temporarily.

[–] Boozilla@lemmy.world 19 points 2 years ago (2 children)

Meanwhile, the power grid, traffic controls, and myriad other pieces of infrastructure and adjacent internet-connected software will be using AI, if they aren't already.

[–] Lodespawn@aussie.zone 16 points 2 years ago

You have a very high opinion of the level of technology running power grids, traffic systems and other infrastructure in most parts of the world.

[–] the_q@lemmy.world 7 points 2 years ago

I'm pretty sure all of the things you listed run on Pentium 4s.

[–] pglpm@lemmy.ca 18 points 2 years ago (4 children)

"Bayesian analysis"? What the heck has this got to do with Bayesian analysis? Does this guy have an intelligence, artificial or otherwise?

Big word make sound smart

[–] Mahlzeit@feddit.de 3 points 2 years ago

It's likely a reference to Yudkowsky or someone along those lines. I don't follow that crowd.

[–] cygnosis@lemmy.world 3 points 2 years ago (1 children)

He's referring to the fact that the Effective Altruism / LessWrong crowd seems to be focused almost entirely on preventing an AI apocalypse at some point in the future. They use a lot of obscure math and logic to explain why that's much more important than dealing with war, homelessness, climate change, or any of the other issues that are causing actual humans to suffer today, or that are 100% certain to cause suffering in the near future.

[–] intensely_human@lemm.ee 2 points 2 years ago

It’s hard to say for sure. He might.

[–] bionicjoey@lemmy.ca 13 points 2 years ago (1 children)

AI companies: "so what you're saying is we should build a killbot that runs on ChatGPT?"

[–] FartsWithAnAccent@lemmy.world 7 points 2 years ago

"We're not going to do that...

...because we already did!"

-Also AI companies probably

[–] greybeard@lemmy.one 10 points 2 years ago (1 children)

If an AI were sufficiently advanced, it could manipulate the stock market to gain a lot of wealth real fast under a corporation with falsified documents, then pay a Chinese fab house to kick off the war machine.

[–] KevonLooney@lemm.ee 4 points 2 years ago (2 children)

Not really. There's no real way to manipulate other traders, and they all use algorithms too. It's people monitoring algorithms doing most of the trading. At best, AI would be slightly faster at noticing patterns and sending a note to a person who tweaks the algorithm.

People who don't invest forget: there has to be someone else on the other side of your trade willing to buy/sell. Like how do you think AI could manipulate housing prices? That's just stocks, but slower.

[–] r00ty@kbin.life 3 points 2 years ago (1 children)

On the one hand, yes. But on the other hand, when a price hits a low there will (because it's a prerequisite for the low to happen) be people selling at market all the way to the bottom. At a high there will be people buying at market all the way to the top. And they'll be doing it in big numbers as well as small.

Yes, most of the movements are caused by algorithms, no doubt. But as the price moves you'll find buyers and sellers being matched right up until it hits the extremes.

AI done well could, in theory, both learn how to capitalise on these extremes by making smarter trades faster, and learn how to trick algorithms and bait humans with its trades. That is, it could act like a human with the entire trading history available to pattern-match against, while reacting in microseconds.
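
For context, the "noticing patterns" mentioned here is usually nothing exotic. A minimal, purely illustrative sketch in Python (the price series, the window size, and the local_extremes helper are all invented for the example, not taken from the thread) of the kind of local-extreme detection a trading bot might run could look like this:

    def local_extremes(prices, window=3):
        """Return (index, kind) pairs where the price is the max or min of its surrounding window."""
        extremes = []
        for i in range(window, len(prices) - window):
            # Look at the price together with `window` ticks on each side.
            neighborhood = prices[i - window:i + window + 1]
            if prices[i] == max(neighborhood):
                extremes.append((i, "high"))
            elif prices[i] == min(neighborhood):
                extremes.append((i, "low"))
        return extremes

    if __name__ == "__main__":
        # Invented toy price series, purely for illustration.
        series = [100, 101, 103, 102, 99, 97, 98, 101, 104, 103, 100, 98, 97, 99, 102]
        for index, kind in local_extremes(series, window=3):
            print(f"local {kind} of {series[index]} at tick {index}")

Run on this toy series it reports a local low of 97 at tick 5 and a local high of 104 at tick 8; real systems just do this kind of thing continuously, over far more data, and feed it into the actual trading logic.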

[–] reverendsteveii@lemm.ee 6 points 2 years ago

Doesn't take a lot to imagine a scenario in which a lot of people die due to information manipulation or the purposeful disabling of safety systems.

Doesn't take a lot to imagine a scenario where a superintelligent AI manipulates people into being its arms and legs (babe, wake up, new conspiracy theory just dropped: Roko is an AI playing the long game and the basilisk is actually a recruiting tool).

Doesn't take a lot to imagine an AI that's capable of seizing control of a lot of the world's weapons and either guiding them itself or taking advantage of onboard guidance to turn them against their owners, or using targeted strikes to provoke a war (this is a sub-idea of manipulating people into being its arms and legs).

Doesn't take a lot to imagine an AI that's capable of purposefully sabotaging the manufacture of food or medicine in such a way that it kills a lot of people before detection.

Doesn't take a lot to imagine an AI capable of seizing and manipulating our traffic systems in such a way as to cause a bunch of accidental deaths and injuries.

But overall my rebuttal is that this AI doom scenario has always hinged on a generalized AI, and that what people currently call "AI" is a long, long way from a generalized AI. So the article is right: ChatGPT can't kill millions of us. Luckily, no one was ever proposing that ChatGPT could kill millions of us.
