this post was submitted on 26 Jun 2025
145 points (98.0% liked)

Technology


Link without the paywall

https://archive.ph/OgKUM

[–] Natanael@infosec.pub 1 points 4 hours ago

This case didn't cover the copyright status of outputs. The ruling so far is just about the process of training itself.

IMHO, generative ML companies should be required to build a process that tracks the influence of individual training samples on outputs and informs users of the potential licensing status of what they generate.
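As a rough illustration of what that process could look like, here's a minimal sketch: each output carries a list of the training samples that most influenced it, with their license, so the user can be warned. All the names here (`InfluenceRecord`, `licensing_warnings`, the 0.2 threshold) are hypothetical, not any real API, and actually estimating influence scores is the hard, unsolved part.

```python
# Hypothetical sketch: generation outputs annotated with per-sample
# influence estimates and license info, surfaced as user-facing warnings.
from dataclasses import dataclass, field

@dataclass
class InfluenceRecord:
    sample_id: str   # identifier of the training sample
    license: str     # e.g. "proprietary", "CC-BY-4.0", "public-domain"
    score: float     # estimated influence on this output, 0.0..1.0

@dataclass
class AttributedOutput:
    text: str
    influences: list[InfluenceRecord] = field(default_factory=list)

    def licensing_warnings(self, threshold: float = 0.2) -> list[str]:
        """Warn about strongly-influencing samples that aren't freely usable."""
        return [
            f"Output may derive from {r.sample_id} ({r.license})"
            for r in self.influences
            if r.score >= threshold and r.license != "public-domain"
        ]

out = AttributedOutput(
    text="...generated text...",
    influences=[
        InfluenceRecord("book-1234", "proprietary", 0.45),
        InfluenceRecord("blog-42", "public-domain", 0.30),
        InfluenceRecord("paper-7", "CC-BY-4.0", 0.05),
    ],
)
print(out.licensing_warnings())
# → ['Output may derive from book-1234 (proprietary)']
```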

Division of liability / licensing responsibility should depend on who contributes what to the prompt / generation. The less effort it takes for the user to get the model to generate an output clearly derived from a protected work, the more liability lies on the model operator. If the user couldn't have known, they shouldn't be liable. If the user deliberately used jailbreaks, etc., then the user is clearly liable.

You get a weird edge case when users unknowingly copy prompts containing jailbreaks, though.
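Written out as a decision rule, the split above might look like the sketch below. `prompt_effort` stands in for how much work the user had to put in to elicit the protected output; the function name, inputs, and the 0.5 cutoff are all illustrative assumptions, not anything from the ruling.

```python
# Hypothetical decision rule for the liability split described above.
def assign_liability(used_jailbreak: bool, knew_jailbreak: bool,
                     prompt_effort: float) -> str:
    """Return who bears licensing liability for a derived output.

    prompt_effort: 0.0 = a trivial prompt triggered the copy,
                   1.0 = the user worked hard to extract it.
    """
    if used_jailbreak and knew_jailbreak:
        return "user"       # deliberate circumvention
    if used_jailbreak:
        return "unclear"    # the unknowingly-copied-jailbreak edge case
    # No jailbreak: the easier it was to trigger a clear derivative,
    # the more responsibility falls on the model operator.
    return "operator" if prompt_effort < 0.5 else "shared"

assert assign_liability(True, True, 0.9) == "user"
assert assign_liability(True, False, 0.2) == "unclear"
assert assign_liability(False, False, 0.1) == "operator"
```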

https://infosec.pub/comment/16682120