Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
Why?
Because LLMs need human-produced material to work with. If the incentive to produce such material drops, generative models will start producing garbage.
It has already started to be a problem with the current LLMs that have exhausted most easily reached sources of content on the internet and are now feeding off LLM-generated content, which has resulted in a sharp drop in quality.
"It has already started to be a problem with the current LLMs that have exhausted most easily reached sources of content on the internet and are now feeding off LLM-generated content, which has resulted in a sharp drop in quality."
Do you have any sources to back that claim? LLMs are rising in quality, not dropping, afaik.
It's still being researched, but there are papers showing that, mathematically, generative models cannot indefinitely feed on their own output. If you see an increase in quality, it's usually because the developers have added a new trove of human-generated data.
In simple terms, these models need two things to be able to generate useful output: they need external guidance about which input is good and which is bad (throughout the process), and they need both types of input to reach a certain critical mass.
Since the reliability of these models is never 100%, with every input-output cycle the quality drops.
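To see how "never 100%" compounds across cycles, here's a back-of-the-envelope sketch. The 90% per-cycle reliability is a made-up illustrative number, not a measured property of any real model:

```python
# Assumed per-cycle reliability of 0.9 -- an illustrative figure,
# not a measured property of any real model.
reliability_per_cycle = 0.9

quality = 1.0  # generation 0: purely human-produced data
for cycle in range(10):
    quality *= reliability_per_cycle  # each retraining cycle compounds the loss

print(f"quality after 10 cycles: {quality:.3f}")  # 0.9 ** 10, about 0.349
```

Even a seemingly high per-cycle reliability erodes quickly once each generation's output becomes the next generation's input.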
If the model input is very well curated and restricted to known good sources, it can continue to improve (and by "improve" I mean asymptotically approach a value that is never 100% but high enough, like over 90%). But if models are allowed to feed on generative output (thrown back at them by social bots and website generators), their quality will take a dive.
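The feedback loop described above can be sketched with a toy experiment (this is just an illustration, not the actual math from those papers; the 50-word vocabulary and 200-token corpus are arbitrary numbers). A "model" that only learns token frequencies and then trains on its own samples loses rare tokens over the generations, and once a token is gone it can never come back:

```python
import random
from collections import Counter

random.seed(0)

def train_and_generate(corpus, n):
    """Toy 'model': learn token frequencies from the corpus, then
    sample n new tokens from the learned distribution."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

# Generation 0: a "human" corpus of 200 tokens over 50 distinct words.
corpus = [f"word{i}" for i in range(50)] * 4
diversity = [len(set(corpus))]

for generation in range(30):
    # Each generation trains only on the previous generation's output.
    corpus = train_and_generate(corpus, 200)
    diversity.append(len(set(corpus)))

print(f"distinct tokens: {diversity[0]} -> {diversity[-1]}")
```

Diversity can only shrink here: a token absent from one generation has zero probability in the next, which is the mechanism behind the "taking a dive" scenario when output is fed back as input.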
I want to point out that this is not an AI issue. Humans don't have 100% correct output either, and we have the exact same problem – feeding on our own online garbage. For us the trouble started showing much more slowly, over the last couple of decades or so, surfacing as talk about "fake news", weaponized misinformation, etc.
AI merely accelerated the process, it hit the limits of reliability much sooner. We will need to solve this issue either way, and we would have needed to solve it even if AI weren't a thing. In a way the appearance of AI helped us because it forces us to deal with the issue of information reliability sooner rather than later.
I wouldn't be too concerned about that; the mathematical models make assumptions that don't hold in the real world. There's still plenty of guidance in the loop, from humans up/downvoting to people generating many pictures and selecting the best one to post. There are also, as you say, lots of places with strong human curation, such as Wikipedia or the official documentation for various tools. And there's the option of running better models against old datasets as the tech progresses.
Because the training, and therefore the datasets, are an important part of the work with AI. A lot of people argue that the people who provided the data (e.g. artists) should therefore get a cut of the revenue, a flat fee, or some similar compensation. Looking at a picture is deemed fine in our society, but copying it and using it for something else is seen more critically.
Btw, I am totally with you regarding the need to not hinder progress, but at the end of the day we need to think about both the future prospects and the morality.
There was something about labels being forced to pay a cut of the revenue to all bigger artists for every CD they'd sell. I can't remember what it was exactly, but something like that could be of use here as well maybe.
Let's be clear: the AI does not in any way "copy" the pictures it is trained on.
Yes.
And let's also pin down that this is exactly where we need more laws. What makes an image copyrightable? When is a copyright violated? And more specifically: can whatever the AI model encompasses contain fully copyrighted material? Can a copyrighted image be reconstructed by noting down all of its features?
This is the exact corner that we are fighting over currently.
This has already been decided. Inspired works are not covered by copyright.
Inspired in the traditional sense, or "inspired" on the basis of datasets with concrete numbers? Huge difference.
Lol not at all.