this post was submitted on 21 Dec 2023
270 points (97.5% liked)
How much bandwidth do you suppose a crawler would use? I'd guess very little
I was thinking more in terms of resources (number of spider threads × posts/communities/users being indexed) that would now be dedicated to a bot, not so much network traffic, which is probably tiny if it isn't downloading images.
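To put rough numbers on the traffic-versus-time tradeoff, here's a back-of-envelope estimate. Every figure is an illustrative assumption (post size, instance size, thread count, request rate), not a measurement of any real instance:

```python
# Back-of-envelope crawl cost estimate. All figures below are
# illustrative assumptions, not measurements of any real instance.
AVG_POST_JSON_BYTES = 2_000        # assumed size of one post as JSON
POSTS = 1_000_000                  # assumed post count on a large instance
THREADS = 8                        # concurrent spider threads
REQS_PER_SEC_PER_THREAD = 2        # polite per-thread request rate

total_bytes = AVG_POST_JSON_BYTES * POSTS
seconds = POSTS / (THREADS * REQS_PER_SEC_PER_THREAD)

print(f"~{total_bytes / 1e9:.1f} GB transferred")       # ~2.0 GB
print(f"~{seconds / 3600:.1f} hours for a full pass")   # ~17.4 hours
```

Under these assumptions a full pass moves only a couple of gigabytes, which supports the point that network traffic is the small part; the real cost is the sustained request load over many hours.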
Right, it would be an initial hit, but if the bot were properly built it wouldn't need to do a full reindex very often. I'm no expert, but I think it could be done in a way that produces no noticeable spike in traffic.
That's the thing: to do a complete indexing of an instance, the crawl would need to be done in chunks, with revisits scheduled. For a large instance that's a lot of DB thrashing if you aren't spacing it out. You could instead sample something like the top 10 posts, but that kind of data makes for a useless search engine, depending on its goal. If you just wanted to catalog the daily top posts of the fediverse, that might work; but if you want to catalog everything, it's going to take a lot of resources and a long time to make sure you're not hammering people's servers.
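The chunked-crawl-with-scheduled-revisits idea above can be sketched roughly like this. Everything here is hypothetical: `fetch_page()` stands in for whatever API an instance actually exposes, and the chunk size, delay, and revisit interval are assumptions, not recommendations:

```python
import heapq
import time

# Sketch of a polite, chunked re-crawl scheduler. fetch_page() is a
# placeholder for a real API call; the constants are assumptions.
CHUNK_SIZE = 50                 # posts fetched per request
DELAY_BETWEEN_REQUESTS = 1.0    # seconds; spaces out DB load on the server
REVISIT_INTERVAL = 24 * 3600    # re-crawl each community once a day

def fetch_page(community, page):
    """Placeholder: return up to CHUNK_SIZE posts for this page, or []."""
    return []  # pretend the community has no posts

def index(posts):
    """Placeholder: hand posts off to the search index."""
    pass

def crawl_community(community):
    """Walk one community page by page, pausing between requests."""
    page = 0
    while True:
        posts = fetch_page(community, page)
        if not posts:
            break
        index(posts)
        page += 1
        time.sleep(DELAY_BETWEEN_REQUESTS)  # never hammer the server

def run(communities, passes=1):
    """Crawl each community `passes` times, spacing revisits apart."""
    # Min-heap of (due_time, visits_done, community) orders revisits.
    queue = [(0.0, 0, c) for c in communities]
    heapq.heapify(queue)
    while queue:
        due, visits, community = heapq.heappop(queue)
        time.sleep(max(0.0, due - time.monotonic()))
        crawl_community(community)
        if visits + 1 < passes:
            heapq.heappush(
                queue, (time.monotonic() + REVISIT_INTERVAL, visits + 1, community)
            )
```

The key design point is the per-request delay plus the revisit heap: the work is smeared out over time so the target instance sees a slow trickle of requests rather than one big reindexing spike.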
It would be very little if it isn't downloading full HTML pages.