this post was submitted on 08 Mar 2024
-4 points (16.7% liked)

technology


This is strange to me. Did the students create the deepfake nudes, or did the software create them? A normal image editor won't just produce explicit material on its own; the user has to draw it with a mouse, tablet, or whatever. But an AI model will. Even if the kids were the ones giving the instructions to the AI, why isn't the software and/or the company that runs it at least somewhat liable for creating child porn?

Suppose the students drew the nudes themselves, but they were really bad stick-figure drawings with a name beside each one to say who it's supposed to be. Is that illegal? What if they were really good artists and the drawings looked like photos? At what point is the art considered good enough to be illegal?

top 1 comment
[–] Grangle1@lemm.ee 3 points 8 months ago

It's kind of like the gun argument: as the saying goes, "guns don't kill people, people kill people". You're correct that the AI needed a prompt to create the images, just as a gun generally needs some human input (pulling a trigger, pressing a button, etc.) to fire. In pretty much every legal context, if someone uses a gun to kill someone else, the person who used the gun, not the gun manufacturer, is held responsible for the crime. Similarly, when a person takes photos of someone else without consent, the photographer, not the camera maker, is responsible. It should go the same way here, no? The ones who gave the prompts to the AI to make those fakes, not the AI program or its developers, would be responsible. A gun, a camera, or an AI program is simply a tool that does what the person operating it tells it to do. In this case, all the AI did was take the inputs the students gave it and do exactly what those inputs instructed; it doesn't produce images, or do anything else, on its own. Therefore, the AI bears no responsibility here.
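To make the "tool does nothing on its own" point concrete, here's a minimal sketch using Hugging Face's diffusers library with a public Stable Diffusion checkpoint (the model name and prompt are illustrative assumptions, nothing from this story): loading the model produces nothing, and an image only exists once a person supplies a prompt.

```python
# A minimal sketch using Hugging Face's diffusers library; the checkpoint
# name and prompt are illustrative assumptions, not details from the case.
import torch
from diffusers import StableDiffusionPipeline

# Loading the model generates nothing; the pipeline just sits there.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# An image only exists once a person writes a prompt and runs the pipeline.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```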

That doesn't mean that can't change, however, especially when you take into account the purpose a tool was made for. Guns, to use the example again, can serve many purposes, but different kinds are built for specific ones: some rifles and shotguns are meant for hunting animals, BB guns and pellet guns are generally used for sport, and then there's the range from handguns to military rifles and machine guns that are meant to harm people. You won't find too many people who would support banning ALL guns, even hunting rifles or BB guns, just because most of them could hurt someone, but there are quite a few people who would ban the military-grade stuff because hurting or killing people is what it's meant for. The utility of having guns for hunting or sport means not all guns should be banned.

AI tools have a similar thing going on. Tools like Midjourney and Stable Diffusion are general image-generation tools designed to produce any kind of image one can think of, and they're mainly meant to produce non-explicit images. There are other AI models out there that are built specifically for the explicit stuff, though. Should all AI image tools be banned because every one of them is capable of producing explicit images, even if only some are meant for that purpose, or should only the ones built to produce explicit images be dealt with?
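For what it's worth, the general-purpose tools already try to encode that distinction in software. A rough sketch, again assuming the stock Stable Diffusion v1.5 checkpoint loaded through diffusers: the default pipeline bundles a safety checker that blanks out images it flags as explicit, and turning it off is a deliberate opt-out by the user, not the tool's default behavior.

```python
from diffusers import StableDiffusionPipeline

# The stock v1-5 checkpoint bundles a safety checker; flagged outputs
# come back blacked out instead of as the generated image.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(pipe.safety_checker is not None)  # True: filtering is on by default

# Turning the filter off requires an explicit choice by the user:
# pipe = StableDiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", safety_checker=None
# )
```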