Wouldn't the YouTube algorithm add an unintentionally bias into the training data?
A lot of YouTubers talk about how they're having to adjust their content and style to maintain viewership numbers. Hence all the clickbait thumbnails & captions.
Probably, but that assumes the transcribers went from video to video following the algorithm. I'd suspect they randomized the videos they chose, or somehow figured out some other distribution.
But that is just a guess, you could be right.
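A minimal sketch of the distinction being guessed at here: picking videos uniformly at random versus following the recommendation chain. All names (`catalog`, `recommended`) are hypothetical placeholders, not anything OpenAI is known to have used.

```python
import random

# Hypothetical setup: a pool of video IDs, and a made-up "recommendation"
# map standing in for what the algorithm would surface next.
catalog = [f"video_{i}" for i in range(1000)]
recommended = {v: random.sample(catalog, 5) for v in catalog}

def sample_uniform(n):
    """Pick n distinct videos uniformly at random -- no recommendation bias."""
    return random.sample(catalog, n)

def sample_by_algorithm(start, n):
    """Follow the recommendation chain from a starting video.

    Whatever the recommender favors dominates the training data,
    which is the bias the comment above is worried about.
    """
    picks, current = [], start
    for _ in range(n):
        picks.append(current)
        current = random.choice(recommended[current])
    return picks
```

The first sampler draws from the whole catalog with equal probability; the second inherits whatever skew the recommender has, since it only ever sees videos the algorithm chooses to surface.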
"unintended"
"Now gamers, before I give you your chat prompt response, I have to tell you about today's sponsor, RAID SHADOW LEGENDS."
As messed up as it sounds, I kind of hope they just let the AI run down the rabbit hole, and it ended up trying to transcribe videos in other languages into English, or ended up training on the most batshit-insane content being pushed.
Well there's your problem right there.
This is the best summary I could come up with:
OpenAI spokesperson Lindsay Held told The Verge in an email that the company curates “unique” datasets for each of its models to “help their understanding of the world” and maintain its global research competitiveness.
The Times article says that the company exhausted supplies of useful data in 2021, and discussed transcribing YouTube videos, podcasts, and audiobooks after blowing through other resources.
By then, it had trained its models on data that included computer code from GitHub, chess move databases, and schoolwork content from Quizlet.
The new policy was reportedly intentionally released on July 1st to take advantage of the distraction of the Independence Day holiday weekend.
It was also apparently limited in the ways it could use consumer data by privacy-focused changes it made in the wake of the Cambridge Analytica scandal.
But the companies’ other option is using whatever they can find, whether they have permission or not, and based on multiple lawsuits filed in the last year or so, that way is, let’s say, more than a little fraught.
The original article contains 650 words, the summary contains 171 words. Saved 74%. I'm a bot and I'm open source!