Well then maybe the AI industry deserves to die.
Technology
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This is true almost every time someone says "but these businesses couldn't survive without it!" Same deal with all the spyware that's part of our daily lives now. If it's not possible for you to make a smart TV without spying on me, then cool, don't make smart TVs.
If your business model crumbles under the weight of ethics, then fuck your business model and fuck you.
There's a big difference between generative image AI and AI for, let's say, the medical industry (DeepMind etc.).
And yes, you can ban the first without the other.
Going after AI as a whole makes no sense, and this politician also makes it seem like it's all the same.
Saying "AI" here is like saying "the internet" when what you want to ban is a specific site.
There is a very interesting dynamic occurring, where things that didn’t used to be called AI have been rebranded as such, largely so companies can claim they’re “using AI” to make shareholders happy.
I think it behooves all of us to stop referring to things blanketly as AI and to name specific technologies and companies as the problem.
Just call it ML then, like we used to; that's what describes it best.
In other news, asking Nick Clegg before emptying out his home would kill the robbery industry.
Very good. Please do that. Now.
And nothing of value was lost.
I'm starting to think we need to reframe this a little. Stop referring to "artists". It's not just lone, artistic types that are getting screwed here, it's literally everyone who has content that's been exposed to the Internet. Artists, programmers, scientists, lawyers, individuals, companies... everyone. Stop framing this as "AI companies versus artists" and start talking about it as "AI companies versus intellectual property right holders", because that's what this is. The AI companies are choosing to ignore IP law because it benefits them. If anyone, in any other context, tried to use this as a legal defense they would be laughed out of the courtroom.
Nick Clegg says asking artists for use permission would ‘kill’ the AI industry
I fail to see any downside to this.
If your business model only works when you don't follow moral or official laws... it shouldn't exist!
Unfortunately, capitalism doesn't work like that...
I bet door-to-door salespeople would make way more money if they could just break into your homes, leave their junk on your table, and steal your credit card, and yet we don't let them do that.
Sounds pretty rapey to me.
This is the same shit sites like YouTube use to get out of being accountable for anything they do. "We are too big. It is unreasonable to ask us to follow the law. So our own benchmark of what counts as a good-faith attempt should suffice."
Motherfucker, then don't be so big! If I'm a real estate developer and my building collapses killing 100 people, I can't go "my empire is too big. It is unreasonable to expect me to follow all the various codes and ordinances designed to keep people safe."
Good
Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models.
No, it should be the opposite: the creative community should have to opt in. The AI can run off the pieces people choose to upload. Everything else is theft.
But he claimed it wasn’t feasible to ask for consent before ingesting their work first.
What the fuck...?! Send a fucking email. If you don't get an answer, then it's a "No". Learn to take no for an answer.
The big issue is that they don't just fail to ask; they also actively ignore it when someone says "no" upfront, e.g. in a robots.txt.
Yeah, if they can't be bothered to check for an opt-in, how can we trust them to respect an opt-out?
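For context, the robots.txt opt-out mentioned above works like this: a site lists directives per crawler user-agent, and a well-behaved crawler is supposed to check them before fetching anything. A minimal sketch using Python's stdlib `urllib.robotparser` (GPTBot and CCBot are real AI-crawler user-agents; the file contents and URL are made-up examples; honoring the file is entirely voluntary, which is exactly the commenters' complaint):

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt a site might serve to opt out of known AI crawlers
# (GPTBot is OpenAI's crawler, CCBot is Common Crawl's) while allowing
# everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

def allowed(user_agent: str, url: str) -> bool:
    """Return True if the robots.txt above permits this agent to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(allowed("GPTBot", "https://example.com/art/piece.png"))       # False: opt-out applies
print(allowed("SomeBrowser", "https://example.com/art/piece.png"))  # True: falls to the * group
```

Nothing enforces this check; a crawler that skips it sees no error at all, which is why an opt-out regime puts all the burden on rights holders.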
But why won't anyone think of the AI shareholders...
Perhaps the government should collect money from the AI companies — they could call it something simple, like "taxes" — and distribute the money to anyone who has ever written something that made its way to the internet (since we can reasonably assume that everything posted online has now been sucked into the slop machines).
I think the primary goal of LLMs is to use them on social media to influence public opinion.
Notice that all companies that have social media are heavily invested in it. Also, the recent fiasco with Grok talking about South African apartheid without being asked shows that such functionality is already being added.
I think the talk of it replacing white-collar jobs is a distraction. Maybe it can replace some, but the "daydreaming" (such a nice word for bullshit) makes the technology not very useful in that direction, I think.
What a fucking shocking idea, right? My mind is blown, and I'm sure Mr. Clegg will be ecstatic when we tell him about it! /s
Greedy dumb mfkers.
Oh no wouldn’t that be a shame. /s
I’m sorry but if your industry requires that you commit a bunch of crimes to make money, it’s not a legitimate industry, it’s a criminal industry. We’ve had these for a long time, and generally they’re frowned upon, because the crimes are usually drugs, guns, murder, sex trafficking, or theft. When the crime is intellectual property theft, apparently we forget to care. Then again, same with wage theft.
Then please, ask them.
"you would basically kill the AI industry in this country overnight." Cutting through to the heart of the issue here: economic FOMO. "If we don't steal this data, someone else will."
And adhering to the law would kill my thriving "pay me a dollar and I allow you to club a billionaire to death"-business. So what?
Or maybe the solution is in dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer. Rather than also kneecapping useful technology.
I feel much of the critique and repulsion comes from people without much knowledge of either art/art history or AI, nor even of the problems and history of socio-economic policies.
Monkeys just want to be angry and throw poop at the things they don't understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued 'elite' abuses that shape our every aspect of life.
The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.
IMO tech bros' main goal for this technology is to use it to manipulate public opinion on social media. It is perfect for that, "daydreaming" (bullshitting) and all.
Notice that all the social media companies are involved in it: Twitter was "sold" to xAI, there was the recent incident with Grok about South African apartheid, the proposed 10-year ban on states regulating it, etc.
They talk about it increasing productivity (and are hoping it can be used for that too), but if people knew it was meant for disinformation, they would be even more against waiving copyright for it.
Good.
I feel the same way about my Linux isos
From a copyright perspective, you don't need to ask for permission to train an AI. It's no different than taking a bunch of books you bought second-hand and throwing them into a blender. Since you're not distributing anything when you do that, you're not violating anyone's copyright.
When the AI produces something though, that's when it can run afoul of copyright. But only if it matches an existing copyrighted work close enough that a judge would say it's a derivative work.
You can't copyright a style (writing, art, etc.), but you can violate a copyright if you copy, say, a mouse in the style of Mickey Mouse. So then the question, from a legal perspective, becomes: do we treat AI like a Xerox copier, or do we treat it like an artist?
If we treat it like an artist the company that owns the AI will be responsible for copyright infringement whenever someone makes a derivative work by way of a prompt.
If we treat it like a copier the person that wrote the prompt would be responsible (if they then distribute whatever was generated).
no different than taking a bunch of books you bought second-hand and throwing them into a blender.
They didn't buy the books. They took them without permission.
A realistic take on the situation.
I fully agree: however much people hate AI, training itself isn't infringement based on how copyright laws are written.
I think we need to treat it as the copier situation: the person distributing the copyright-infringing material is at fault, not the tool used to create it.
I agree with both of you but it's a bit more nuanced than that: what if someone not familiar with the original IPs asks for a 'space wizard' or an 'Italian plumber cartoon', it outputs Obi Wan or Mario, and they use it in their work? Who's getting sued by Disney or Nintendo?
I could fairly easily ask a human artist to draw me something that would infringe on a copyright for a character they had never even seen before. It would technically be against the law, but given that no other parties know about it, it's unlikely ever to be caught. The legal problems arise if I use that art in a visible fashion such that the copyright holder would find out, and then it would be me getting sued, not the artist.
Maybe the question is how you sanction other malign actors who intend to take the data anyway. We know China and others don't give a shit about (especially Western) IP rights. Not sure that really justifies us ignoring IP rights, though.