Definitely, have a look at !fediverse@lemmy.ml; one instance admin decided to shut down their instance due to this.
General Discussion
Welcome to Lemmy.World General!
This is a community for general discussion where you can get your bearings in the fediverse. Discuss topics & ask questions that don't seem to fit in any other community, or don't have an active community yet.
About Lemmy World
Finding Communities
Feel free to ask here or over in: !lemmy411@lemmy.ca!
Also keep an eye on:
- !newcommunities@lemmy.world
- !communitypromo@lemmy.ca
- !new_communities@mander.xyz
- !communityspotlight@lemmy.world
- !wowthislemmyexists@lemmy.ca!
For more involved tools for finding communities to join, check out Lemmyverse!
Additional Discussion-Focused Communities:
- !actual_discussion@lemmy.ca - Note this is for more serious discussions.
- !casualconversation@lemm.ee - The opposite of the above, for more laidback chat!
- !letstalkaboutgames@feddit.uk - Into video games? Here's a place to discuss them!
- !movies@lemm.ee - Watched a movie and wanna talk to others about it? Here's a place to do so!
- !politicaldiscussion@lemmy.world - Want to talk politics apart from political news? Here's a community for that!
Rules and Policies
Remember, Lemmy World rules also apply here.
0. See: Rules for Users.
- No bigotry: including racism, sexism, homophobia, transphobia, or xenophobia.
- Be respectful. Everyone should feel welcome here.
- Be thoughtful and helpful: even with "silly" questions. The world won't be made better by dismissive comments to others on Lemmy.
- Link posts should include some context/opinion in the body text when the title is unaltered, or be titled to encourage discussion.
- Posts concerning other instances' activity/decisions are better suited to the !fediverse@lemmy.world or !lemmydrama@lemmy.world communities.
- No Ads/Spamming.
- No NSFW content.
Is there a way for admin teams and self-hosters to have a shared banlist, IPs and instances included? Something like a sideloaded addition to the local filter, with a limited number of collectively approved contributors committing to it directly? I don't know exactly how it should work, but it might reduce the effectiveness of these attacks.
This is how it tends to work for smaller Mastodon instances, so I'd be unsurprised if it's either already possible or at least coming soon.
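To sketch the idea (purely hypothetical: the URL, file name, and CSV layout below are made up for illustration, and real admin tooling would apply the result through the instance's own API or database rather than a flat file):

```python
import csv
import io
import urllib.request

# Hypothetical location of a collectively maintained blocklist (illustrative only).
SHARED_BLOCKLIST_URL = "https://example.org/shared-blocklist.csv"
LOCAL_BLOCKLIST_PATH = "local-blocklist.csv"


def fetch_shared_blocklist(url: str) -> set[str]:
    """Download the shared list and return the set of blocked domains."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(text))
    return {row["domain"].strip().lower() for row in reader if row.get("domain")}


def load_local_blocklist(path: str) -> set[str]:
    """Read the instance's existing local blocklist, if there is one."""
    try:
        with open(path, newline="") as f:
            return {row["domain"].strip().lower() for row in csv.DictReader(f)}
    except FileNotFoundError:
        return set()


def merge_and_save(shared: set[str], local: set[str], path: str) -> None:
    """Write the union of the shared and local lists back to the local file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["domain"])
        for domain in sorted(shared | local):
            writer.writerow([domain])


if __name__ == "__main__":
    shared = fetch_shared_blocklist(SHARED_BLOCKLIST_URL)
    local = load_local_blocklist(LOCAL_BLOCKLIST_PATH)
    merge_and_save(shared, local, LOCAL_BLOCKLIST_PATH)
    print(f"Merged {len(shared)} shared entries into {LOCAL_BLOCKLIST_PATH}")
```

The hard part isn't the plumbing, it's the governance: deciding who gets commit access to the shared list and how disputed entries get resolved.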
People won't stop posting pics of Trump either. idc if you guys think we're making fun of him; his face is still on everything in here.
Do general users have to worry about backlash from this type of stuff? I still don't fully understand how federated content is passed along to different instances. What does a normal person's IP history show in regards to what is connected to it?
Like, I never saw the illegal content in my feed, but I have an account on the instance where the content was posted.
What kind of history do system administrators see from someone's IP in these circumstances? Can a person be fired or jailed just for having an account associated with the illegal shit on the instance?
I hope my question makes sense.
Getting charged with possession of CP isn't some Voldemort thing where you can't even be in its vicinity. With illegal internet material you're already pretty "safe" in terms of prosecution: if you could be found guilty over content you never even interacted with, they'd have to change some laws or put way more people in prison. Prosecutors have to show intent in seeking it out, and unless you're a huge target (a political figure, or someone who produces/spreads the content) you'll be fine.
I was banned from Reddit for upvoting an Ana de Armas gif which, according to them, was child porn. I appealed and got my account back, but wtf, that same gif is posted all the time now in multiple subs and is never removed.
I wonder if a bot using AI image recognition would be feasible. Train it on CP and similar awful stuff and have it auto-flag posts that fit the bill for moderator removal. The problem would be sourcing the training material and finding people willing to expose themselves to what it flags.
The best approach would be to train it on live posts at first, as human moderators flag CSAM; once it's trained up, it can start auto-flagging posts, with human mods double-checking. Don't keep the CSAM itself: train the neural net, then delete the material.
This should be doable without storing CSAM for any longer than it takes to catch it and remove it.
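As a rough sketch of that flag-then-review loop (everything here is illustrative: `classify` stands in for whatever model ends up being trained, and the threshold is arbitrary):

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ReviewQueue:
    """Posts awaiting human moderator review. Only post IDs and scores are
    stored; the flagged media itself is never retained here."""
    items: list[tuple[int, float]] = field(default_factory=list)

    def add(self, post_id: int, score: float) -> None:
        self.items.append((post_id, score))


def scan_post(
    post_id: int,
    image_bytes: bytes,
    classify: Callable[[bytes], float],  # pluggable model: image bytes -> probability
    queue: ReviewQueue,
    threshold: float = 0.8,  # arbitrary cutoff for illustration
) -> None:
    """Score an upload and flag it for human review if it scores too high.

    While the model is still weak, `classify` can be an early checkpoint;
    moderators' accept/reject decisions on queued posts then become the
    next round of training labels.
    """
    score = classify(image_bytes)
    if score >= threshold:
        queue.add(post_id, score)
```

The image bytes only live in memory for the duration of the call, which matches the "don't keep the material" constraint.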
Something like this already exists and is used by Google and law enforcement agencies.
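For reference, those deployed systems (e.g., Microsoft's PhotoDNA and Google's CSAI Match) don't classify images from scratch; they match perceptual hashes of uploads against databases of hashes of already-known material, so a platform only ever stores hashes. A toy version of that matching step, using the open-source ImageHash library (the known-bad hash below is a placeholder, not real data; real hash feeds come from vetted clearinghouses like NCMEC):

```python
import io

import imagehash  # pip install ImageHash
from PIL import Image  # pip install Pillow

# Placeholder entry; a real deployment would load hashes supplied by a
# vetted clearinghouse, never compute its own from source material.
KNOWN_BAD_HASHES = {imagehash.hex_to_hash("0f0f0f0f0f0f0f0f")}

MAX_HAMMING_DISTANCE = 5  # how close two hashes must be to count as a match


def matches_known_material(image_bytes: bytes) -> bool:
    """Compare an upload's perceptual hash against the known-bad list."""
    upload_hash = imagehash.phash(Image.open(io.BytesIO(image_bytes)))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - bad <= MAX_HAMMING_DISTANCE for bad in KNOWN_BAD_HASHES)
```

Unlike a cryptographic hash, a perceptual hash survives resizing and re-encoding, which is why this approach works on reposted copies of known images.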