this post was submitted on 26 May 2025
195 points (100.0% liked)

Technology


As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.

Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn't feasible to ask for consent before ingesting their work.

“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”

top 37 comments
[–] aramis87@fedia.io 107 points 2 days ago (2 children)

Well then maybe the AI industry deserves to die.

This is true almost every time someone says "but without [X], these businesses couldn't survive!" Same deal with all the spyware that's part of our daily lives now. If it's not possible for you to make a smart TV without spying on me, then cool, don't make smart TVs.

If your business model crumbles under the weight of ethics, then fuck your business model and fuck you.

Related: https://www.eff.org/deeplinks/2019/06/felony-contempt-business-model-lexmarks-anti-competitive-legacy

[–] themurphy@lemmy.ml 9 points 2 days ago (2 children)

There's a big difference between generative image AI and AI for, let's say, the medical industry, DeepMind, etc.

And yes, you can ban the first without the other.

Going after AI as a whole makes no sense, and this politician makes it seem like it's all the same thing.

Saying "AI" here is like saying "the internet" when you want to ban a specific site.

[–] megopie@beehaw.org 4 points 1 day ago

There is a very interesting dynamic occurring, where things that didn’t used to be called AI have been rebranded as such, largely so companies can claim they’re “using AI” to make shareholders happy.

I think it behooves all of us to stop referring to things blanketly as AI and to call out specific technologies and companies as the problem.

[–] 30p87@feddit.org 5 points 1 day ago

Just call it ML then, like we used to; that's what describes it best.

[–] jmcs@discuss.tchncs.de 39 points 2 days ago

In other news, asking Nick Clegg before emptying out his home would kill the robbery industry.

[–] 30p87@feddit.org 86 points 2 days ago

Very good. Please do that. Now.

[–] nokturne213@sopuli.xyz 67 points 2 days ago

would ‘kill’ the AI industry

And nothing of value was lost.

[–] FergleFFergleson@infosec.pub 35 points 2 days ago

I'm starting to think we need to reframe this a little. Stop referring to "artists". It's not just lone, artistic types that are getting screwed here, it's literally everyone who has content that's been exposed to the Internet. Artists, programmers, scientists, lawyers, individuals, companies... everyone. Stop framing this as "AI companies versus artists" and start talking about it as "AI companies versus intellectual property right holders", because that's what this is. The AI companies are choosing to ignore IP law because it benefits them. If anyone, in any other context, tried to use this as a legal defense they would be laughed out of the courtroom.

[–] HelixDab2@lemm.ee 39 points 2 days ago

Nick Clegg says asking artists for use permission would ‘kill’ the AI industry

I fail to see any downside to this.

[–] tuhriel@infosec.pub 42 points 2 days ago

If your business model only works when you don't follow moral or legal rules... it shouldn't exist!

Unfortunately, capitalism doesn't work like that...

[–] Kichae@lemmy.ca 36 points 2 days ago

I bet door-to-door salespeople would make way more money if they could just break into your homes, leave their junk on your table, and steal your credit card, and yet we don't let them do that.

[–] cygnus@lemmy.ca 46 points 2 days ago* (last edited 2 days ago)

Sounds pretty rapey to me.

[–] LandedGentry@lemmy.zip 38 points 2 days ago

This is the same shit sites like YouTube use to get out of being accountable for anything they do. "We are too big. It is unreasonable to ask us to follow the law, so our benchmark of what counts as a good-faith attempt should suffice."

Motherfucker, then don't be so big! If I'm a real estate developer and my building collapses killing 100 people, I can't go "my empire is too big. It is unreasonable to expect me to follow all the various codes and ordinances designed to keep people safe."

[–] SplashJackson@lemmy.ca 22 points 2 days ago
[–] IllNess@infosec.pub 29 points 2 days ago (2 children)

Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models.

No, it should be the opposite. The creative community should have to opt in. AI can run off the uploaded pieces. Everything else is theft.

But he claimed it wasn’t feasible to ask for consent before ingesting their work first.

What the fuck...?! Send a fucking email. If you don't get an answer, then it's a "No". Learn to take no for an answer.

[–] tuhriel@infosec.pub 27 points 2 days ago

The big issue is that they don't just fail to ask; they also actively ignore it when someone says "no" upfront, e.g. in a robots.txt.
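For context, telling AI crawlers "no" upfront looks something like this in a site's robots.txt. (GPTBot is OpenAI's training crawler and CCBot is Common Crawl's; the full set of user-agent strings varies by company and changes over time, so treat this as an illustrative sketch, not a complete list.)

```
# Ask AI training crawlers not to index anything on this site.
# Compliance is entirely voluntary on the crawler's part.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Nothing enforces this; it only works if the crawler chooses to honor it, which is exactly the commenter's point.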

[–] Saleh@feddit.org 13 points 2 days ago

Yeah, if they can't be bothered to check for an opt-in, how can we trust them to respect an opt-out?

[–] nous@programming.dev 27 points 2 days ago (1 children)

But why won't anyone think of the AI shareholders...

[–] 30p87@feddit.org 6 points 2 days ago

I do think of them! Though, I'm lucky that thoughts aren't subject to § 212 StGB.

[–] will_a113@lemm.ee 19 points 2 days ago (2 children)

Perhaps the government should collect money from the AI companies — they could call it something simple, like "taxes" — and distribute the money to anyone who had ever written something that made its way to the internet (since we can reasonably assume that everything posted online has now been sucked in to the slop machines).

[–] takeda@lemm.ee 15 points 2 days ago* (last edited 2 days ago)

I think the primary goal of LLMs is to use them on social media to influence public opinion.

Notice that all companies that run social media are heavily invested in it. Also, the recent fiasco with Grok talking about South African apartheid without being asked shows that such functionality is being added.

I think the talk of it replacing white-collar jobs is a distraction. Maybe it can replace some, but the "daydreaming" (such a nice word for bullshit) makes the technology not very useful in that direction.

[–] avidamoeba@lemmy.ca 6 points 2 days ago

What a fucking shocking idea, right? My mind is blown, and I'm sure Mr. Clegg will be ecstatic when we tell him about it! /s

Greedy dumb mfkers.

[–] hperrin@lemmy.ca 12 points 2 days ago* (last edited 2 days ago)

Oh no wouldn’t that be a shame. /s

I’m sorry but if your industry requires that you commit a bunch of crimes to make money, it’s not a legitimate industry, it’s a criminal industry. We’ve had these for a long time, and generally they’re frowned upon, because the crimes are usually drugs, guns, murder, sex trafficking, or theft. When the crime is intellectual property theft, apparently we forget to care. Then again, same with wage theft.

[–] TachyonTele@lemm.ee 15 points 2 days ago

Then please, ask them.

[–] doleo@lemmy.one 11 points 2 days ago

"you would basically kill the AI industry in this country overnight."

Cutting through to the heart of the issue here: economic FOMO. "If we don't steal this data, someone else will."

[–] Ooops@feddit.org 5 points 2 days ago

And adhering to the law would kill my thriving "pay me a dollar and I allow you to club a billionaire to death"-business. So what?

[–] Peanutbjelly@sopuli.xyz 4 points 2 days ago (1 children)

Or maybe the solution is dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer, rather than kneecapping useful technology.

I feel much of the critique and repulsion comes from people without much knowledge of art/art history or AI, nor of the problems and history of socio-economic policy.

Monkeys just want to be angry and throw poop at the things they don't understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued 'elite' abuses that shape our every aspect of life.

The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.

[–] takeda@lemm.ee 4 points 2 days ago

IMO the tech bros' main goal for this technology is to use it to manipulate public opinion on social media. It is perfect for that, and the "daydreaming" (bullshitting) fits the purpose.

Notice that all social media companies are involved in it: Twitter was "sold" to xAI, there was the recent incident with Grok about South African apartheid, the proposed 10-year ban on state regulation, etc.

They talk about it increasing productivity (and they hope it can be used for that too), but if people knew it was meant for disinformation, they would be even more against waiving copyright for it.

[–] BlackRing@midwest.social 3 points 2 days ago
[–] huquad@lemmy.ml 2 points 2 days ago

I feel the same way about my Linux isos

[–] riskable@programming.dev 2 points 2 days ago* (last edited 2 days ago) (2 children)

From a copyright perspective, you don't need to ask for permission to train an AI. It's no different than taking a bunch of books you bought second-hand and throwing them into a blender. Since you're not distributing anything when you do that you're not violating anyone's copyright.

When the AI produces something, though, that's when it can run afoul of copyright. But only if it matches an existing copyrighted work closely enough that a judge would call it a derivative work.

You can't copyright a style (writing, art, etc) but you can violate a copyright if you copy say, a mouse in the style of Mickey Mouse. So then the question—from a legal perspective—becomes: Do we treat AI like a Xerox copier or do we treat it like an artist?

If we treat it like an artist the company that owns the AI will be responsible for copyright infringement whenever someone makes a derivative work by way of a prompt.

If we treat it like a copier the person that wrote the prompt would be responsible (if they then distribute whatever was generated).

[–] jjjalljs@ttrpg.network 7 points 2 days ago

no different than taking a bunch of books you bought second-hand and throwing them into a blender.

They didn't buy the books. They took them without permission.

[–] BlameThePeacock@lemmy.ca 2 points 2 days ago (1 children)

A realistic take on the situation.

I fully agree, despite how much people hate AI, training itself isn't infringement based on how copyright laws are written.

I think we need to treat it as the copier situation: the person distributing the infringing material is at fault, not the tool used to create it.

[–] OfCourseNot@fedia.io 1 points 1 day ago (1 children)

I agree with both of you, but it's a bit more nuanced than that: what if someone unfamiliar with the original IPs asks for a "space wizard" or an "Italian plumber cartoon", it outputs Obi-Wan or Mario, and they use it in their work? Who's getting sued by Disney or Nintendo?

[–] BlameThePeacock@lemmy.ca 1 points 1 day ago

I could fairly easily ask a human artist to draw me something that would infringe on a copyright for a character they had never even seen before. It would technically be against the law, but given that no other parties know about it, it's unlikely to ever get caught. The legal problems arise if I use that art in a visible fashion such that the copyright holder would find out, and then it would be me getting sued, not the artist.

[–] mctoasterson@reddthat.com 1 points 2 days ago

Maybe the question is how you sanction other malign actors who intend to steal the data anyway. We know China and others do not give a shit about (especially Western) IP rights. Not sure that really justifies us ignoring IP rights, though.