FaceDeer@kbin.social 3 points 1 year ago

> It is that simple.

No, it really isn't.

If you want to step back, let's step back. One of the earliest and simplest forms of "generative AI" is the Markov chain. To build one, you take a large amount of training text and run it through a program that analyzes it, looking for the probability of specific words following other words.

So, for example, if it trained on the sentence "You must be the change you wish to see in the world", as it scanned through it would first go "ah, the word 'you' is followed by the word 'must' 100% of the time" and then, once it got a little further in, it would go "wait, now the word 'you' was followed by the word 'wish'. So 'you' is followed by 'must' 50% of the time and 'wish' 50% of the time."

As it keeps reading through training data, those probabilities are the only things that it retains. It doesn't store the training data, it just stores information about the training data. After churning through millions of pages of data it'll have a huge table of words and the associated probabilities of finding other specific words right after them.
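To make that concrete, here's a minimal sketch of the table-building step in Python (the function name and details are mine, purely for illustration):

```python
from collections import defaultdict

def train(text):
    """Count which words follow which, then turn the counts into probabilities."""
    words = text.lower().split()
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    # Only these pair probabilities are retained; the text itself is discarded.
    return {
        word: {w: n / sum(followers.values()) for w, n in followers.items()}
        for word, followers in counts.items()
    }

table = train("You must be the change you wish to see in the world")
print(table["you"])  # {'must': 0.5, 'wish': 0.5}
```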

This table does not in any meaningful sense "encode" the training data. There's nothing you can do to recover the training data from it. It has been so thoroughly ground up and distilled that nothing of the original training data remains. It's just a giant pile of word pairs and probabilities.
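Generating text from that table is then just a weighted random walk, continuing the sketch above (again, illustrative only):

```python
import random

def generate(table, start, length=12):
    """Emit words one at a time, sampling each from the stored probabilities."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:  # this word was never followed by anything in training
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate(table, "you"))
```

With a training set of one sentence the walk can only retrace that sentence, but at the scale of millions of pages each step blends statistics from countless sources, and the output stops corresponding to any one of them.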

It's similar to how these more advanced AIs train their neural networks. The network isn't "memorizing" pictures, it's learning concepts from them. If you train an image generator on a million images of cats, you're teaching it what cat fur looks like under various lighting conditions, what shape cats generally have, what sorts of environments you usually see cats in, the sense of smug superiority and disdain that cats exude, and so forth. So when you tell the AI "generate a picture of a cat" it can come up with something that has a high degree of "catness" to it, but is not actually any specific image from its training set.
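The same storage asymmetry shows up even in a toy model. Here's a deliberately tiny stand-in (a ten-weight classifier on made-up data, nothing like a real image generator) just to show that what survives training is a small set of weights, not the examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 1,000 "images" (here, just 10-number feature vectors),
# labelled cat (1) or not-cat (0) by a hidden rule.
X = rng.normal(size=(1000, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

# Tiny logistic-regression "network": its entire memory is one weight vector.
w = np.zeros(10)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # predictions on the training data
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient descent step

# After training, the 10,000 numbers of training data have been distilled
# into 10 weights. The examples themselves are not stored anywhere.
print(w.round(2))
```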

If that level of transformation is not enough for you and you still insist that the output must be considered a derivative work of the training data, well, you're going to take the legal system down an untenable rabbit hole. This sort of learning is what human artists do all the time: everything we create is based on patterns learned from the examples we've seen before.