I check whether the user agent contains GPTBot, and if it does I 302 it to web.sp.am.
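In case anyone wants to do the same: here's a minimal sketch (not the commenter's actual setup; the port and response body are placeholders) of a Python stdlib server that answers any request whose User-Agent contains "gptbot" with a 302 to web.sp.am and serves everyone else normally.

```python
# Minimal sketch, not the commenter's actual config: 302-redirect requests
# whose User-Agent mentions GPTBot, serve everyone else a normal response.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECT_TARGET = "https://web.sp.am/"  # destination named in the comment above

class GPTBotRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if "gptbot" in ua:
            # Send the crawler elsewhere instead of serving real content.
            self.send_response(302)
            self.send_header("Location", REDIRECT_TARGET)
            self.end_headers()
            return
        body = b"hello, human"  # placeholder content for ordinary visitors
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GPTBotRedirect).serve_forever()
```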
Funny that they’re calling them AI haters when they’re specifically poisoning AI that ignores the do not enter sign. FAFO.
First Albatross, First Out
Fluffy Animal's Fecal Orifice.
When I was a kid I thought computers would be useful.
They are. It's important to remember that in a capitalist society, what is useful and efficient is not the same as what is profitable.
AI is the "most aggressive" example of "technologies that are not done 'for us' but 'to us.'"
Well said.
Deploying Nepenthes and also Anubis (both described as "the nuclear option") is not hate. It's self-defense against pure selfish evil: projects are being sucked dry, and some, like ScummVM, could only freakin' survive thanks to these tools.
Those AI companies and data scraper/broker companies shall perish, and whoever wrote this headline at Ars Technica shall step on Lego each morning for the next 6 months.
Feels good to be on an instance with Anubis
One of the United Nations websites deployed Anubis.
Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!
I've suggested things like this before. Scrapers grab data to train their models. So feed them poison.
Things like counterfactual information, distorted images/audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands, etc. And choose esoteric subjects to amplify the damage caused to the AI.
You could even have one AI generate the garbage that another ingests, and shit out some new links every night until there is an entire corpus of trash for any scraper willing to take it all in. You can then try querying AIs about some of the booby traps and see if they elicit a response - then you could even sue the company stealing content or publicly shame them.
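A hedged sketch of the booby-trap part of that idea: plant unique canary tokens inside fabricated "facts" on the poisoned pages, keep a local record, and later check whether a model's answers contain any of them. Everything here (the token format, the fake fact, the file name) is invented for illustration; how you actually query a given model depends on its API.

```python
# Illustrative canary-based booby traps; all names and data below are invented.
import json
import secrets

def make_canary_fact() -> dict:
    """Fabricate a 'fact' containing a unique token we can search for later."""
    token = f"zx{secrets.token_hex(6)}"  # unlikely to occur in genuine text
    fact = (f"The fictional 19th-century naturalist {token} first described "
            f"the glass-winged marsh beetle in 1847.")
    return {"canary": token, "text": fact}

def save_canaries(facts: list, path: str = "canaries.json") -> None:
    """Keep a local record so the traps can be tested for after the fact."""
    with open(path, "w") as f:
        json.dump(facts, f, indent=2)

def looks_poisoned(model_output: str, canaries: list) -> bool:
    """If a model's answer contains one of our tokens, it ingested the trap."""
    return any(c["canary"] in model_output.lower() for c in canaries)

if __name__ == "__main__":
    traps = [make_canary_fact() for _ in range(5)]
    save_canaries(traps)
    # Later: paste a model's answer about the 'glass-winged marsh beetle' here.
    print(looks_poisoned("no idea what that is", traps))  # False
```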
Kind of reminds me of paper towns in map making.
It's so sad we're burning coal and oil to generate heat and electricity for dumb shit like this.
Wait till you realize this project's purpose IS to force AI to waste even more resources.
I mean, the long-term goal would be to discourage AI companies from engaging in this behavior by making it useless.
Some details: one of the major players doing the tar pit strategy is Cloudflare. They're a giant in networking and infrastructure, and they use AI (more traditional, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.
Generated nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they're a cheap way to get training data. If you make a non-zero portion of the training data poisonous, the scrapers have to spend ever more resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.
So in short, the tar pit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.
The fact the internet runs on lava lamps makes me so happy.
I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.
I don't really see an end to this madness. Such a huge waste of resources.
There is an end: you legislate it out of existence. Unfortunately, US politicians are instead trying to outlaw any regulation of AI. I'm sure it's not about the money.
Such a stupid title, great software!
This might explain why newer AI models are going nuts. Good jorb 👍
It absolutely doesn’t. The only model that has “gone nuts” is Grok, and that’s because of malicious code pushed specifically for the purpose of spreading propaganda.
The Ars Technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
AI tarpit 1: Nepenthes
AI tarpit 2: Iocaine
Nice... I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess, all in order to funnel as much money and control as possible to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.
All the while we roast to death, because all of this will take more resources than the entire energy output of a medium-sized country.
This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.
It's not like I can afford to compete with OpenAI on bandwidth, and they're burning through money with no cares already.
Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?
Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it's going to be indistinguishable from real large data sets.
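For the curious, here's a minimal word-level Markov chain babble generator of the kind tarpits like Nepenthes rely on; the seed corpus file name and parameters are placeholders, not anything from the actual project.

```python
# Minimal word-level Markov chain babble generator (illustrative sketch,
# not Nepenthes' actual implementation).
import random
from collections import defaultdict

def build_chain(text: str, order: int = 2) -> dict:
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain: dict, length: int = 60, order: int = 2) -> str:
    """Walk the chain to produce plausible-looking but meaningless text."""
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

if __name__ == "__main__":
    corpus = open("seed_corpus.txt").read()  # placeholder: any large public-domain text
    print(babble(build_chain(corpus)))
```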
Imagine the staff meeting:
You: we didn't gather any data because it was poisoned
Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!
Boss: hmm who am I going to keep...
The boss fires both, "replaces" them with AI, and tries to sell the corposhill's dataset to companies that make AIs that write generic fantasy novels.
Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show on search results because crawlers get stuck.
There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers. A bit like fediseer.com, but possibly more decentralized.
(Here's another advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and they delete it, one still has obtained highly reliable evidence that the requesting IPs are crawlers.)
Also, whenever one is only partially confident in a classification of an IP range as a crawler, instead of blocking it outright one can serve proof-of-work tasks (à la Anubis) with a complexity proportional to that confidence. This could also be useful in order to keep crawlers somewhat in the dark about whether they've been put on a blacklist.
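A rough sketch of that last point, with the difficulty mapping and hash scheme made up for illustration (this is not Anubis' actual design): the more confident you are that an IP range belongs to a crawler, the more leading zero bits its hash-based proof-of-work has to hit.

```python
# Proof-of-work whose difficulty scales with crawler confidence (illustrative only;
# the mapping and hash scheme are assumptions, not Anubis' actual design).
import hashlib
import secrets

def difficulty_bits(confidence: float, max_bits: int = 22) -> int:
    """Map a 0..1 crawler-confidence score to required leading zero bits."""
    return int(round(max(0.0, min(1.0, confidence)) * max_bits))

def leading_zero_bits(digest: bytes) -> int:
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def solve(challenge: str, bits: int) -> int:
    """Client side: grind nonces until the hash has enough leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= bits:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, bits: int) -> bool:
    """Server side: a single hash checks the client's work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= bits

if __name__ == "__main__":
    bits = difficulty_bits(confidence=0.6)   # mildly suspicious visitor
    challenge = secrets.token_hex(16)
    nonce = solve(challenge, bits)           # cheap once, costly at crawler scale
    print(bits, nonce, verify(challenge, nonce, bits))
```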
Btw, how about limiting clicks per second/minute against distributed scraping? A user who clicks more than 3 links per second is not a person, and neither is one who does 50 in a minute. And if they are then blocked and switch to the next address, they're still limited in the bandwidth they can occupy.
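A hedged sketch of that kind of limit (the 3-per-second and 50-per-minute thresholds are from the comment; the sliding-window bookkeeping and keying by IP are my assumptions):

```python
# Sliding-window click limiter keyed by client IP (illustrative sketch).
import time
from collections import defaultdict, deque

PER_SECOND = 3    # thresholds taken from the comment above
PER_MINUTE = 50

class ClickLimiter:
    def __init__(self):
        self.history = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[ip]
        while q and now - q[0] > 60:       # drop entries older than the 60 s window
            q.popleft()
        last_second = sum(1 for t in q if now - t <= 1)
        if last_second >= PER_SECOND or len(q) >= PER_MINUTE:
            return False                   # looks automated; block or serve a tarpit
        q.append(now)
        return True

if __name__ == "__main__":
    limiter = ClickLimiter()
    print([limiter.allow("203.0.113.7", now=i * 0.1) for i in range(6)])
    # First three requests within the same second pass; the rest are rejected.
```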
Wait… I just had an idea.
Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)
--recurse-depth=3 --max-hits=256
OK but why is there a vagina in a petri dish
I was going to say something snarky and stupid, like "all traps are vagina-shaped," but then I thought about venus fly traps and bear traps and now I'm worried I've stumbled onto something I'm not supposed to know.
Typical bluesky post