this post was submitted on 20 Aug 2023

lemmy.ml meta

Anything about the lemmy.ml instance and its moderation.

For discussion about the Lemmy software project, go to !lemmy@lemmy.ml.

Some context about this here: https://arstechnica.com/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/

The robots.txt file would be updated with this entry:

User-agent: GPTBot
Disallow: /

Obviously this is meaningless against non-OpenAI scrapers, or anyone who just doesn't give a shit.
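Compliance is entirely up to the client: robots.txt is only consulted by crawlers that choose to honor it. A well-behaved crawler can check the rules with Python's stdlib `urllib.robotparser`; a minimal sketch (the URL is a placeholder, and "SomeOtherBot" is a hypothetical user agent for contrast):

```python
from urllib.robotparser import RobotFileParser

# The same robots.txt entry proposed above.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler identifying as GPTBot is denied everywhere,
# while any other user agent is unaffected by this entry.
print(rp.can_fetch("GPTBot", "https://lemmy.ml/post/123"))        # False
print(rp.can_fetch("SomeOtherBot", "https://lemmy.ml/post/123"))  # True
```

A scraper that never runs this check, or lies about its user agent, is never blocked by the file itself.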

[–] Hubi@feddit.de 2 points 2 years ago (2 children)

Wouldn't they theoretically be able to set up their own instance, federate with all the larger ones and scrape the data this way? Not sure if blocking them via the robots.txt file is the most effective barrier in case that they really want the data.

[–] dreadedsemi@lemmy.world 11 points 2 years ago* (last edited 2 years ago)

Robots.txt is more of an honor system. If they respect it, they won't pull that trick either.

[–] NightAuthor@beehaw.org 5 points 2 years ago

Robots.txt is just a notice anyway. A scraper could simply ignore it; no workaround necessary.