Inari@furry.engineer | 2 points | 8 months ago (last edited 8 months ago)

@crashdoom I think automated moderation tools are potentially problematic unless they can take into account the cultural norms of the person speaking. For example, there's a traditional British food whose name is a homophobic slur in American English, and a common British slang term for a cigarette falls foul of the same kind of filter. People who like that food have been auto-banned from other platforms for posting about it with no homophobic intent at all. I just don't think automated mod tools are capable enough to work out, from the speaker's other posts or profile, that what they said isn't prejudiced because of who said it: for instance, that someone using the n-word is Black, in which case the standard for whether the post should trigger discipline could be radically different than if a white person had said it.
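To spell out the failure mode (purely as a sketch of the naive approach, not how any particular tool actually works, and with a placeholder term rather than a real slur list): a plain word-list filter only sees tokens, so a benign British post gets flagged exactly like genuine abuse.

```python
# Hypothetical sketch of a naive word-list auto-moderator.
# It flags any post containing a listed term, with no awareness of
# dialect, speaker identity, or intent -- the failure mode described above.

BLOCKED_TERMS = {"exampleslur"}  # placeholder; a real list would hold actual slurs


def naive_flag(post_text: str) -> bool:
    """Return True if the post contains any blocked term, context-free."""
    words = {w.strip(".,!?\"'").lower() for w in post_text.split()}
    return bool(words & BLOCKED_TERMS)


# A post about British food or cigarettes that happens to use the same word
# is flagged identically to abusive use, because the filter never looks at
# who is speaking or what they mean.
```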

If auto-moderation is introduced, I think it's important that the bot messages anyone it acts against, explains the grounds for its decision, and tells them how to appeal to a human if they think it got it wrong.
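As a rough sketch of what that notification could look like (the field names, wording, and `appeal_url` are mine for illustration, not any existing tool's API):

```python
from dataclasses import dataclass


@dataclass
class ModAction:
    """Hypothetical record of what the auto-moderation bot did and why."""
    username: str
    post_excerpt: str
    rule_matched: str
    action_taken: str  # e.g. "post hidden", "temporary mute"


def build_notification(action: ModAction, appeal_url: str) -> str:
    """Compose the message the bot would send to the affected user.

    Sketch only: the structure and appeal link are assumptions, but the
    point is that the user sees the grounds and a path to a human review.
    """
    return (
        f"Hi @{action.username}, an automated moderation rule was applied to one of your posts.\n"
        f"Action taken: {action.action_taken}\n"
        f"Rule matched: {action.rule_matched}\n"
        f"Post excerpt: \"{action.post_excerpt}\"\n"
        f"If you believe this was a mistake, a human moderator will review it here: {appeal_url}"
    )
```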