This post was submitted on 30 Oct 2023
518 points (94.7% liked)

Technology

  • Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn't want to compete with open source, he added.
top 50 comments
[–] henfredemars@infosec.pub 71 points 1 year ago* (last edited 1 year ago) (8 children)

Some days it looks to be a three-way race between AI, climate change, and nuclear weapons proliferation to see who wipes out humanity first.

But on closer inspection, you see that humans are playing all three sides, and still we are losing.

[–] xapr@lemmy.sdf.org 36 points 1 year ago (3 children)

AI, climate change, and nuclear weapons proliferation

One of those is not like the others. Nuclear weapons can wipe out humanity at any minute right now. Climate change has been starting the job of wiping out humanity for a while now. When and how is AI going to wipe out humanity?

This is not a criticism directed at you, by the way. It's just a frustration: I keep hearing about AI being a threat to humanity, and it sounds like a far-fetched idea. It almost seems like it's being used to distract from much more critically pressing issues, like the myriad environmental problems we are already deep into, not just climate change. I wonder who would want to distract from those? Oil companies would definitely be number one on the list of suspects.

[–] p03locke@lemmy.dbzer0.com 25 points 1 year ago* (last edited 1 year ago)

Agreed. This kind of debate is about as pointless as declaring self-driving cars are coming out in 5 years. The tech is way too far behind right now, and it's not useful to even talk about it until 50 years from now.

For fuck's sake, just because a chatbot can pretend it's sentient doesn't mean it actually is sentient.

Some large tech companies didn’t want to compete with open source, he added.

Here. Here's the real lede. Google has been scared of open-source AI because they can't profit off of freely available tools. Now they want to change the narrative so that the government steps in and regulates their competition. Of course, their highly paid lobbyists will be right there to write plenty of loopholes and exceptions to make sure only the closed-source corpos come out on top.

Fear. Uncertainty. Doubt. Oldest fucking trick in the book.

[–] Stovetop@lemmy.world 6 points 1 year ago (2 children)

When and how is AI going to wipe out humanity?

With nuclear weapons and climate change.

[–] Lupus108@feddit.de 15 points 1 year ago

Uh, nice, a crossover episode for the series finale.

[–] afraid_of_zombies@lemmy.world 3 points 1 year ago (3 children)

I don't think the oil companies are behind these articles. That is very much wheels-within-wheels thinking of a kind corporations don't generally invest in. It is easier to just deny climate change than to get everyone distracted by something else.

[–] shalafi@lemmy.world 9 points 1 year ago (2 children)

52-yo American dude here, no longer worried about nuclear apocalypse. Been there, done that, ain't seeing it. If y'all think geopolitics are fucked up now, 🎵"You should have seen it in color."🎶

We came close a time or three, but no one's insane enough to push the button, and no ONE can push the button. Even Putin in his desperation will be stymied by the people who actually have to push MULTIPLE buttons.

AI? IDGAF. Computers have power sources and plugs. Absolutely disastrous events could unfold, but enough people pulling enough plugs will kill any AI insurgency. Look at Terminator 2 and ask yourself why the AI had to have autonomous machines to win. I could take out the neighborhood power supply with a couple of suitable guns. I'm sure smarter people than I could shut down DCs.

Climate change? Sorry kids, it's too late and you are righteously fucked. Not saying we shouldn't go full force on mitigation efforts, but y'all haven't seen the changes I've seen in 50 years. Winters are clearly warmer, summers hotter, and I just got back from my camp in the swamp. The swamp is dry for the first time in 4 years.

And here's one you might not have personally experienced: the insects are disappearing. I could write an essay on bugs alone. And don't get me started on wildlife populations.

[–] Plague_Doctor@lemmy.world 3 points 1 year ago

I'm sitting here hoping that they all block each other out because they are all trying to fit through the door at the same time.

[–] LazaroFilm@lemmy.world 3 points 1 year ago

An AI will detonate nuclear weapons to change the climate into an eternal winter. Problem solved. All three win at the same time. No losers… oh. Wait, no…

[–] burliman@lemm.ee 3 points 1 year ago

Then the errant gamma ray burst sneaks in for the kill.

[–] MargotRobbie@lemmy.world 49 points 1 year ago

Why do you think Sam Altman is always using FUD to push for more AI restrictions? He already got his data collection, so he wants to make sure ""Open""AI is the only game in town and prevent any future competition from obtaining the same amount of data they collected.

Still, I have to give Zuck his credit here: the existence of open models like LLaMa 2 that can be fine-tuned and run locally has really put a damper on OpenAI's plans.
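
(For anyone wondering what "running locally" means in practice, here's a minimal sketch using the Hugging Face transformers library -- not anything from the article, and the model ID and settings are just illustrative. Llama 2 weights are gated, so you'd have to accept Meta's license on the Hub first, and even the 7B variant wants a decent GPU or plenty of RAM.)

```python
# Rough sketch of local inference with an open-weight model via the
# Hugging Face "transformers" library (pip install transformers accelerate torch).
# The model ID below is illustrative and gated behind Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" lets accelerate place weights on GPU/CPU as available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, why do open-weight models matter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on local hardware -- no API calls, no data leaving the box.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```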

[–] elias_griffin@lemmy.world 38 points 1 year ago (1 children)

"Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI"

In other words: pay us to overregulate and we'll protect you from extinction. A Mafia perspective.

[–] ohlaph@lemmy.world 8 points 1 year ago

Right?!?!! Lines are obvious. Only if they thought they could get away with it, and they might, actually, but also what if?!?!

[–] uriel238@lemmy.blahaj.zone 31 points 1 year ago (2 children)

Restricting open source offerings only drives them underground where they will be used with fewer ethical considerations.

Not that big tech is ethical in its own right.

Bot fight!

[–] r3df0x@7.62x54r.ru 9 points 1 year ago (1 children)

Restricting AI would require surveillance on every computer all the time.

[–] ReveredOxygen@sh.itjust.works 2 points 1 year ago (2 children)

Eh, sure, to totally get rid of it, but taking it off GitHub would get rid of 90% of it.

[–] Buddahriffic@lemmy.world 5 points 1 year ago

I don't think there's any stopping the "fewer ethical considerations", banned or not. For each angle of AI that some people want to prevent, there are others who specifically want it.

Though there is one angle that affects all of this: the more AI development happens in the open, the faster the underground stuff comes along, because it can learn from the open work. Driving it underground will slow it down, but then it can still pop up when it's ready, with less capability on the other side to counter it with another AI-based solution.

[–] joel_feila@lemmy.world 27 points 1 year ago (2 children)

They don't want to compete with open source. Yeah, that is not new.

[–] JadenSmith@sh.itjust.works 20 points 1 year ago* (last edited 1 year ago) (3 children)

Lol how? No seriously, HOW exactly would AI 'wipe out humanity'???

All this fear-mongering bollocks is laughable at this point, or it should be. Seriously, there is no logical pathway to human extinction via AI, and these people need to put the comic books down.
The only risks AI poses are to traditional working patterns, which have always been exploited to further a numbers game between billionaires (and their assets).

These people are not scared of losing their livelihoods, but of losing the ability to control yours. Something that makes life easier and more efficient while requiring less work? Time to crack out the whips, I suppose?

[–] BrianTheeBiscuiteer@lemmy.world 15 points 1 year ago* (last edited 1 year ago)

Working in a corporate environment for 10+ years, I can say I've never seen a case where large productivity gains turned into the same people producing even more. It's always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.

Let's say Ford found a way to produce F150s twice as fast. They're not going to produce twice as many; they'll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That's actually what they're obligated to do: appease shareholders first.

[–] Plague_Doctor@lemmy.world 10 points 1 year ago

I mean, I don't want an AI to do what I do as a job. They don't have to pay the AI, and food and housing, in a lot of places, aren't seen as a human right but as a privilege you're allowed if you have the money to buy it.

[–] warmaster@lemmy.world 5 points 1 year ago

Just wait and see

Remindmebot! 10 years

[–] people_are_cute@lemmy.sdf.org 15 points 1 year ago (1 children)

All the biggest tech/IT consulting firms that used to hire engineering college freshers by the millions each year have declared they either won't be recruiting at all this month or will only be recruiting for senior positions. If AI were to wipe out humanity, it would probably be through unemployment-related poverty, thanks to our incompetent policymakers.

[–] Socsa@sh.itjust.works 7 points 1 year ago* (last edited 1 year ago)

A technological revolution that disrupts the current capitalist order by eliminating labor scarcity, ultimately rendering the capital class obsolete, isn't far off from Marx's original speculative endgame for historical materialism. All the other stuff beyond that is kind of wishy-washy, but the original point about technological determinism has some legs, imo.

[–] Substance_P@lemmy.world 14 points 1 year ago (1 children)

With Google's annual revenue from its search engine estimated at around $70 to $80 billion, it's no wonder there is great concern from big tech about the numerous AI tools out there that could spell an end to that fire hose of sweet, sweet monetization.

[–] little_hermit@lemmus.org 11 points 1 year ago* (last edited 1 year ago) (1 children)

If you're wondering how AI wipes us out, you'd have to consider humanity's tendency to adopt any advantage offered in warfare. Nations are in perpetual distrust of each other -- an evolutionary characteristic of our tribal brains. The other side is always plotting to dominate you, take your patch of dirt. Your very survival depends on outpacing them! You dip your foot in the water, add AI to this weapons system, or that self-driving tank. But look, the other side is doing the same thing. You train even larger models, give them more control of your arsenal. But look, the other side is doing even more! You develop even more sophisticated AI models; your very survival depends on it! And then, one day, your AI model is so sophisticated, it becomes self-aware... and you wonder where it all went wrong.

[–] Salamendacious@lemmy.world 10 points 1 year ago (9 children)

So you're basically scared of Skynet?

[–] jarfil@lemmy.world 9 points 1 year ago* (last edited 1 year ago) (4 children)

They went a bit too far with the argument... the AI doesn't need to become self-aware, just exceptionally efficient at eradicating "the enemy"... just let it loose from all sides all at once, and nobody will survive.

How many people are there in the world, who aren't considered an "enemy" by at least someone else?

[–] little_hermit@lemmus.org 5 points 1 year ago (1 children)

Don't be ridiculous, time travel is impossible.

[–] AphoticDev@lemmy.dbzer0.com 11 points 1 year ago (2 children)

These dudes are convinced AI is gonna wipe us out despite the fact it can't even figure out the right number of fingers to give us.

We're so far away from this being a problem that it never will be, because climate change will have killed us all long before the machines have a chance to.

[–] TwilightVulpine@lemmy.world 6 points 1 year ago (2 children)

People may argue that AI is quickly improving on this, but it will take a massive leap to get from a perfect diffusion model to an Artificial General Intelligence. Fundamentally, those aren't even the same kind of thing.

But AI as it is today can already cause a lot of harm simply by taking over jobs that people need to make a living, in the absence of something like UBI.

Some people say this kind of Skynet fearmongering is nothing but another kind of marketing for AI investors. It makes the technology seem much more powerful than it actually is.

[–] Rooty@lemmy.world 9 points 1 year ago

File this under duuuuuuhhhhh

[–] fubo@lemmy.world 7 points 1 year ago (1 children)

The tech companies did not invent the AI risk concept. Culturally, it emerged out of 1990s futurism.

[–] SpaceNoodle@lemmy.world 8 points 1 year ago (2 children)

Karel Čapek wrote about it in 1920

[–] Taleya@aussie.zone 8 points 1 year ago

Asimov actively named it the Frankenstein Complex in the '40s/'50s, Ellison wrote about AM in the '60s... this definitely isn't a '90s vanity.

[–] fubo@lemmy.world 3 points 1 year ago

Yeah, but I mean the AI risk stuff that people like Steve Omohundro and Eliezer Yudkowsky write about.

[–] ohlaph@lemmy.world 6 points 1 year ago

Not just that, but also that.

[–] FMT99@lemmy.world 5 points 1 year ago (1 children)

The Google Brain cofounder is not Big Tech?

[–] ripe_banana@lemmy.world 14 points 1 year ago (3 children)

Imo, Andrew Ng is actually a cool guy. He started Coursera and deeplearning.ai to teach people about machine/deep learning. Also, he does a lot of stuff at Stanford.

I wouldn't put him in the corporate shill camp.

[–] maniclucky@lemmy.world 10 points 1 year ago (2 children)

I took several of his classes. At the very least he's an excellent instructor.

[–] elliot_crane@lemmy.world 7 points 1 year ago

He really is. He's one of those rare instructors who can take very complex and intricate topics and break them down into something you can digest as a student, while still giving you room to learn and experiment yourself. In essence, an actual master of his craft.

I also agree with the comment that he doesn't come across as the corporate shill type; he seems much more like a guy who just really loves ML/AI and wants to spread that knowledge.

[–] AtHeartEngineer@lemmy.world 3 points 1 year ago

Same, I went from kind of understanding most of the concepts to grokking a lot of it pretty well. He's super good at explaining things.

[–] Fedizen@lemmy.world 5 points 1 year ago (1 children)

meanwhile, capitalism is setting the world on fire

[–] Socsa@sh.itjust.works 4 points 1 year ago* (last edited 1 year ago)

Ok, you know what? I'm in...

If all the crazy people in the world collectively stopped spending their crazy points on sky wizards and climate skepticism and put all of their energy into AI doomerism, I legitimately think the world might be a better place.

[–] Fades@lemmy.world 3 points 1 year ago (1 children)

Obviously a part of the equation. All of these people with massive amounts of wealth, power, and influence push for horrific shit primarily because it'll make them a fuck ton of money, and the consequences won't hit till they're gone, so fuck it.
