this post was submitted on 28 Aug 2023
477 points (96.1% liked)


Yeah so, someone just straight up posted child porn on Lemmy Shitposts. Jesus Christ, this is getting out of hand.

Sick fucking bastards

(page 2) 50 comments
[–] Blaze@discuss.tchncs.de 13 points 2 years ago

Definitely. Have a look at !fediverse@lemmy.ml; one instance admin decided to shut down their instance because of this.

[–] whataboutshutup@discuss.online 12 points 2 years ago (1 child)

Is there a way for admin teams and self-hosters to maintain a shared banlist, with IPs and instances included? Something like a sideloaded addition to the local filter, with a limited set of collectively approved contributors committing to it directly. I don't know exactly how it should work, but it might reduce the effectiveness of these attacks.
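
To make the idea concrete, here's a minimal sketch of what I imagine, in Python. The URL, file format, and sync mechanism are all made up; nothing like this exists in Lemmy today:

```python
# Hypothetical shared-banlist sync: every name and URL below is an
# assumption for illustration, not a real Lemmy API or feed.
import json
import urllib.request

SHARED_LIST_URL = "https://example.org/shared-banlist.json"  # assumed feed
LOCAL_BLOCKLIST = "blocklist.json"                           # assumed local file

def sync_blocklist() -> None:
    """Merge the collectively maintained banlist into the local one."""
    with urllib.request.urlopen(SHARED_LIST_URL) as resp:
        shared = json.load(resp)  # e.g. {"instances": [...], "ips": [...]}

    try:
        with open(LOCAL_BLOCKLIST) as f:
            local = json.load(f)
    except FileNotFoundError:
        local = {"instances": [], "ips": []}

    # Union the shared entries into the local lists, deduplicated.
    for key in ("instances", "ips"):
        local[key] = sorted(set(local[key]) | set(shared.get(key, [])))

    with open(LOCAL_BLOCKLIST, "w") as f:
        json.dump(local, f, indent=2)

# sync_blocklist() would run on a cron schedule
```

The merge itself is trivial; the hard part is governance, that is, deciding who gets to commit to the shared feed.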

[–] AnyOldName3@lemmy.world 4 points 2 years ago

This is how it tends to work for smaller Mastodon instances, so I'd be unsurprised if it's either already possible or at least coming soon.

[–] Kahlenar@lemmy.world 12 points 2 years ago (1 child)

People won't stop posting pics of Trump either. I don't care if you guys think we're making fun of him; his face is still on everything in here.

load more comments (1 reply)
[–] ieightpi@lemmy.world 10 points 2 years ago* (last edited 2 years ago) (1 child)

Do general users have to worry about backlash from this type of stuff? I still don't fully understand how federated content is passed along to different instances. What does a normal person's IP history show in regards to what is connected to it?

Like, I never saw the illegal content on my feed, but I have an account on the instance where the content was posted.

What kind of history do system administrators see from someone's IP in these circumstances? Can a person be fired or jailed just for having an account associated with the illegal shit on the instance?

I hope my question makes sense.

[–] Solarius@lemmy.sdf.org 8 points 2 years ago (2 children)

Getting charged for possession of CP isn't some Voldemort thing where you're guilty just for being in its vicinity. With illegal internet material, you're already pretty "safe" in terms of prosecution: if you could be found guilty over content you never even interacted with, they'd have to change some laws or put way more people in prison. Prosecutors have to show intent in seeking it out, and unless you're a huge target (political, or someone who produces/spreads the content) you'll be fine.

[–] atticus88th@lemmy.world 3 points 2 years ago (1 child)

I was banned from Reddit for upvoting an Ana de Armas gif that, according to them, was child porn. I appealed and got my account back, but wtf: that same gif gets posted all the time now in multiple subs and is never removed.

load more comments (1 reply)
[–] ArchmageAzor@lemmy.world 7 points 2 years ago (5 children)

I wonder if a bot using AI image recognition would be feasible. Train it on CP and similar awful stuff and have it auto-flag posts that fit the bill for moderator removal. The problem would be sourcing the training material and finding people willing to expose themselves to what it flags.
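
The plumbing around such a bot would be the easy part; the model is the hard part. A minimal sketch of the flagging side, with the classifier and the moderation queue deliberately left as placeholders, since neither exists as a public API:

```python
# Sketch of an auto-flagging bot. The classifier and queue are
# placeholders; an actual model would need lawfully sourced training
# data, which is exactly the problem noted above.
from dataclasses import dataclass

FLAG_THRESHOLD = 0.8  # assumed confidence cutoff, would need tuning

@dataclass
class Post:
    post_id: int
    image_bytes: bytes

def classify(image_bytes: bytes) -> float:
    """Placeholder: probability the image is prohibited content."""
    raise NotImplementedError("no such public model exists")

def enqueue_for_review(post: Post, score: float) -> None:
    """Placeholder: hide the post and notify human moderators."""
    raise NotImplementedError

def scan_post(post: Post) -> None:
    score = classify(post.image_bytes)
    if score >= FLAG_THRESHOLD:
        enqueue_for_review(post, score)  # humans make the final call
```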

[–] meldroc@lemmy.world 5 points 2 years ago (1 child)

The best approach would be to train it on live posts as human moderators flag CSAM; once it's trained up, it can start auto-flagging posts, with human mods double-checking. Don't keep the CSAM material: just train the neural net, then delete.

This should be doable without storing CSAM for any longer than it takes to catch and remove it.
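
Roughly this loop, where every helper is a hypothetical stand-in rather than a real Lemmy or ML API:

```python
# Hedged sketch of the flag -> train -> delete loop described above.
# All helpers are hypothetical stand-ins.

def extract_features(image_bytes: bytes) -> list[float]:
    """Placeholder: turn the image into model inputs."""
    raise NotImplementedError

def training_step(features: list[float], label: bool) -> None:
    """Placeholder: one online update of the moderation model."""
    raise NotImplementedError

def on_moderator_decision(image_bytes: bytes, is_csam: bool) -> None:
    features = extract_features(image_bytes)  # derived features only
    training_step(features, label=is_csam)    # learn from the mod's call
    # image_bytes is never written to disk; once this returns, the only
    # thing retained is the updated model weights.
```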

load more comments (1 reply)
[–] OsrsNeedsF2P@lemmy.ml 4 points 2 years ago (1 child)

Honestly, just auto-remove it and let users appeal.

load more comments (1 reply)
[–] DarkWasp@lemmy.world 3 points 2 years ago* (last edited 2 years ago)

Something like this already exists and is used by Google and law enforcement agencies.
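
The best-known example is Microsoft's PhotoDNA: uploads are hashed and checked against databases of known material maintained with organizations like NCMEC. A deliberately simplified sketch of the lookup idea; real systems use perceptual hashes that survive resizing and re-encoding, not the exact-match SHA-256 shown here, and the hash lists are only shared with vetted parties:

```python
# Simplified illustration of hash-list matching (the approach behind
# tools like PhotoDNA). Real systems use perceptual hashes, not the
# exact-match SHA-256 used here, and the hash database is not public.
import hashlib

known_hashes: set[str] = set()  # in practice, loaded from a vetted source

def matches_known_material(image_bytes: bytes) -> bool:
    """Check an upload against the list of known-bad hashes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_hashes
```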

load more comments (2 replies)
load more comments