this post was submitted on 13 Sep 2023
51 points (78.0% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Some argue that bots should be entitled to ingest any content they see, because people can.

[–] Gormadt@lemmy.blahaj.zone 7 points 1 year ago (1 children)

Don't we humans derive from our trained dataset: our lives?

If you had a human with no "trained dataset," they would have only just been born. Even then you run into an issue, since fetuses have been shown to respond to audio stimulation while they're in the womb.

The question of consciousness is a really hard one, and we may never have an answer that everyone agrees on.

Right now we're in the infant days of AI.

[–] RickRussell_CA@kbin.social 1 points 1 year ago (2 children)

To be clear, I don't think the fundamental issue is whether humans have a training dataset. We do. And it includes copyrighted work. It also includes our unique sensory perceptions and lots of stuff that is definitely NOT the result of someone else's work. I don't think anyone would dispute that copyrighted text, pictures, and sounds are integrated into human consciousness.

The question is whether it is ethical, and should it be legal, to feed copyrighted works into an AI training dataset and use that AI to produce material that replaces, displaces, or competes with the copyrighted work used to train it. Should it be legal to distribute or publish that AI-produced material at all if the copyright holder objects to the use of their work in an AI training dataset? (I concede that these may be two separate, but closely related, questions.)

[–] AEsheron@lemmy.world 4 points 1 year ago (1 children)

What level of abstraction is enough? Training doesn't store or reference the work at all; it automatically derives a set of weights from it. But what if you had a legion of interns manually deriving the weights and entering them instead? Setting aside the impracticality: if I look at a picture, write down a long list of small adjustments (-2.343, -0.02, +5.327, and so on), and adjust the algorithm's parameters by hand without ever scanning the picture in, is that legal? And if it is, does that mean the automation of the process is the illegal part?
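The intern thought experiment above can be sketched in a few lines: "training" on a work yields only a list of small numeric deltas applied to the model's parameters, and the work itself is never stored or referenced afterwards. This is a toy illustration under that framing, not any real training loop; the function name and the parameter values are made up.

```python
# Toy sketch: learning from a picture produces only small numeric
# adjustments (deltas) to pre-existing parameters. Whether a machine
# computes the deltas or interns write them down by hand, the result
# is the same: the picture itself is never stored.

def apply_adjustments(weights, deltas):
    """Apply a list of small per-parameter adjustments."""
    return [w + d for w, d in zip(weights, deltas)]

# Parameters before seeing the picture (hypothetical values).
weights = [0.5, 1.2, -0.7]

# The long list of small adjustments from the comment above.
deltas = [-2.343, -0.02, +5.327]

weights = apply_adjustments(weights, deltas)
print(weights)  # only the adjusted parameters remain
```

The question in the comment then becomes: at what point in this pipeline, if any, does a copy of the work exist in a legal sense?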

[–] RickRussell_CA@kbin.social 1 points 1 year ago

Right now our understanding of derivative works is mostly subjective. We look at the famous Obama "HOPE" image, and the connection to the original news photograph from which it was derived seems quite clear. We know it's derivative because it looks derivative. And we know it's a violation because the person who took the news photograph says that they never cleared the photo for re-use by the artist (and indeed, demanded and won compensation for that reason).

Should AI training be required to work from legally acquired data, and what level of abstraction from the source data constitutes freedom from derivative work? Is it purely a matter of the output being "different enough" from the input, or do we need to draw a line in the training data, or...?

All good questions.

[–] Gormadt@lemmy.blahaj.zone 1 points 1 year ago (1 children)

We were talking about consciousness, not AI-created works and copyright, but I do have some opinions on that.

I think that if an artist doesn't want their works included in an AI dataset then it is their right to say no.

And yeah, all the extra data that we humans fundamentally acquire in life does change everything we make.

[–] RickRussell_CA@kbin.social 2 points 1 year ago

And yeah, all the extra data that we humans fundamentally acquire in life does change everything we make.

I'd argue that it's the crucial difference. People on this thread are arguing as if humans never make original observations, never observe anything new, and never draw new conclusions or interpretations of new phenomena, so everything humans make must be derived from past creations.

Not only is that clearly wrong, but it also fails the test of infinite regress: if humans can only create from the work of other humans, how was anything ever created in the first place? It's a risible suggestion.