They almost certainly do, if only because of the practicalities: adding a new record and having it fetched in place of the old one is simpler than making and propagating an in-place edit across all of their databases. With some exceptions, it's easier to store each edit as an additional row with an incremented version number and always fetch the latest version than to scan through the database and overwrite the original.
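A minimal sketch of what that append-only approach could look like, assuming a simple relational store (the table and column names here are made up for illustration, not anything Reddit actually uses):

```python
import sqlite3

# Hypothetical append-only store: every edit inserts a new row with a higher
# version number; nothing is ever overwritten in place.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comment_versions (
        comment_id INTEGER,
        version    INTEGER,
        body       TEXT,
        edited_at  TEXT,
        PRIMARY KEY (comment_id, version)
    )
""")

def save_edit(comment_id: int, body: str, edited_at: str) -> None:
    # Append a new version instead of updating the existing row.
    row = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM comment_versions WHERE comment_id = ?",
        (comment_id,),
    ).fetchone()
    conn.execute(
        "INSERT INTO comment_versions VALUES (?, ?, ?, ?)",
        (comment_id, row[0] + 1, body, edited_at),
    )

def latest(comment_id: int) -> str:
    # Readers only ever fetch the highest version number.
    row = conn.execute(
        "SELECT body FROM comment_versions WHERE comment_id = ? "
        "ORDER BY version DESC LIMIT 1",
        (comment_id,),
    ).fetchone()
    return row[0]

save_edit(42, "original comment", "2024-03-01")
save_edit(42, "edited to say something else", "2024-03-07")
print(latest(42))  # -> "edited to say something else"
```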
It would also help with any administration/moderation tasks if they could see whether people posted rule-breaking content and then tried to hide it behind edits.
That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.
If this is true, it shifts the problem from "not having it" to "not knowing which version should be used" (to train the LLM).
They could feed it the unedited versions and call it a day, but people often edit their content to correct it or add further info, especially for "meatier" content (like tutorials). So there's still some value in the edits, and I believe Google will be at least tempted to use them.
If that's correct, overwriting your comments with nonsense will still lower the value of edited comments for LLM training. It should have an impact, just not as big a one as if they kept no version history.
I know from experience (I'm a former Reddit janny) that moderators can't see earlier versions of the content, only the last one. The admins might, though.
The one from TD, right?
Honestly, parsing through version history is something an LLM could handle. It might even make more sense of a thread with the history than without it, for example when someone replies to a comment and the parent is later edited to say something different. No one would have to waste their time filtering anything.
They could use an LLM to parse through the version history of all those posts/comments and use the result to train another LLM. It sounds like a bad (and expensive, processing-time-wise) idea, but it could be done.
EDIT: thinking further on this, it's actually fairly doable. It's generally a bad idea to feed the output of an LLM into another, but in this case you're simply using it to pick one among multiple versions of a post/comment made by a human being.
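A rough sketch of that idea, purely to illustrate the shape of it: the `call_llm` helper below is a hypothetical stand-in for whatever model API you'd actually use, and the prompt wording is mine.

```python
# Sketch: ask a model to pick which version of a human-written comment looks
# most useful as training data. `call_llm` is a hypothetical stand-in for a
# real model API call and just needs to return the model's text reply.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def pick_best_version(versions: list[str]) -> str:
    numbered = "\n\n".join(
        f"Version {i + 1}:\n{text}" for i, text in enumerate(versions)
    )
    prompt = (
        "Below are several versions of the same user comment, oldest first.\n"
        "Reply with only the number of the version that is most coherent and "
        "informative, ignoring versions that were blanked or replaced with "
        "nonsense.\n\n" + numbered
    )
    reply = call_llm(prompt)
    try:
        choice = int(reply.strip()) - 1
    except ValueError:
        choice = 0  # fall back to the original version if the reply is unusable
    return versions[choice] if 0 <= choice < len(versions) else versions[0]
```

The key point is that the model only selects; whatever ends up in the training set is still one of the human-written versions, not generated text.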
It's still worth scorching the earth, though, so other human users don't bother with the platform.