AI-Created Art Isn’t Copyrightable, U.S. Judge Says in Ruling That Could Give Hollywood Studios Pause
(www.hollywoodreporter.com)
I agree. The types of AIs we have today are nothing more than mixers of various mental conceptions used to create something new. Those mental conceptions come from life experience and are shaped by a person's worldview.
Take that mental conception away, and would the AIs we currently have be able to thrive on their own? The answer is no.
When you look at it through the lens of the latest get-rich-quick-off-some-tech-that-few-people-understand grift, it makes perfect sense.
They naively see AI as a magic box that can make infinite "content" (which of course belongs to them for some reason, and is fair use of other people's copyrighted data for some reason), and infinite content = infinite money, just as long as you can ignore the fundamentals of economics and intellectual property.
People have invested a lot of their money and emotional energy into AI because they think it'll make them a return on investment.
Current AI models are 100% static; they do not change at all. Trying to ascribe any kind of sentience to them, or anything in that direction, makes no sense, since the models fundamentally aren't capable of it. They learn patterns from the world and can mush them together in original ways. That's neat, and it might even be an important step towards something more human-like, but that's all they do; AI is not people. They don't think while you aren't looking, and they aren't even learning while you are using them. The learning is a completely separate step for these models. Treating them like a person is fundamentally misunderstanding how they work.
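To illustrate that point, here is a minimal sketch (assuming PyTorch, which the comment doesn't name) showing that a deployed model's parameters stay exactly the same no matter how many times you run it; changing them would require a separate, explicit training step.

```python
# Minimal sketch, assuming PyTorch: a deployed model's weights are frozen.
# Running inference any number of times changes nothing; "learning" only
# happens in a separate training step that explicitly updates the parameters.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)      # stand-in for any trained model
model.eval()                 # inference mode

weights_before = model.weight.clone()

with torch.no_grad():        # no gradients tracked, no updates made
    for _ in range(1000):
        model(torch.randn(1, 4))   # use the model as often as you like...

# ...and its parameters are untouched afterwards.
assert torch.equal(weights_before, model.weight)
```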
AI can solve a lot of problems that are unsolvable by any other means. It also has made rapid progress over the last 10 years and seems to continue to do so. So it's not terribly surprising that there is hype about it.
The problem with that is, if you aren't developing AI right now, the competition will be. It's just math. Even if you outlawed it, companies would simply move to other countries. Technology is hard to stop, especially when it's clearly a superior solution to the alternatives.
Another problem is that "think about things" just hasn't been very productive so far. The problems AI can create are quite real; the solutions on the other side, much less so. I do agree with Hinton that we should put far more effort into AI safety research, but at the same time I expect that to mostly show us more ways in which AI can go wrong, without providing much to prevent it.
I am not terribly optimistic here; just look at how well we are doing with climate change, which is a much simpler problem with much easier solutions, and we are still nowhere close to actually solving it.
When I generate AI art I do so by forming a mental conception of the image I want and then giving the AI instructions to produce it. Sometimes those instructions are fairly high-level, such as "a mouse wearing a hat", and other times they are very exacting and can take the form of an existing image or sketch with an accompanying description of how I'd like the AI to interpret it. When I'm doing inpainting I may select a particular area of a source image and tell the AI "building on fire" to have it put a flaming building in that spot, for example.
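For concreteness, here is a minimal sketch of that kind of inpainting step, assuming the Hugging Face diffusers library and its Stable Diffusion inpainting checkpoint (both assumptions on my part, not something named in the comment): you hand the pipeline the source image, a mask covering the selected area, and a short prompt describing what should go there.

```python
# Minimal inpainting sketch, assuming Hugging Face `diffusers` and the
# runwayml/stable-diffusion-inpainting checkpoint (assumptions, not from
# the comment). White pixels in the mask mark the region to repaint.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("street_scene.png").convert("RGB")   # hypothetical source image
mask = Image.open("building_mask.png").convert("RGB")     # hypothetical mask of the selected area

# The prompt only needs to describe what should appear in the masked region.
result = pipe(prompt="building on fire", image=source, mask_image=mask).images[0]
result.save("street_scene_inpainted.png")
```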
To me this seems very similar to photography, except that I'm using my prompts and other inputs to aim a camera at places in a latent space that contains all possible images. I would expect the legal situation to eventually shake out along those lines.
This particular lawsuit is about someone trying to assign the copyright for a photo to the camera that took it, which is just kind of silly on its face and not very relevant. Cameras can't hold copyrights under any circumstances.