this post was submitted on 28 Mar 2025
135 points (92.5% liked)

Ask Lemmy


I have noticed that Lemmy, so far, does not have a lot of fake accounts from bots or AI slop, at least from what I can tell. I am wondering: how the heck do we keep this community free of that kind of stuff as continuous waves of redditors land here and the platform grows?

EDIT: a potential solution:

I have an idea: people could flag a post or a user as a bot, and if it's confirmed to be a bot, the moderators could have a tool that essentially shadow-bans it into an inbox that just gets dumped occasionally. I'm thinking this because the people creating the bots might not realize their bot has been banned and so wouldn't try to create replacement bots. This could effectively reduce the number of bots without bot creators realizing it, or knowing whether their bots have been blocked or not. The one thing that would also be needed is a way to request being unbanned for accounts hit as false positives. These tools would have to be built into Lemmy's moderation tools, and I don't know if any of that exists currently.
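The flow described above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not Lemmy's actual moderation backend; all class and method names here are made up:

```python
from collections import defaultdict

class ShadowBanList:
    """Hypothetical shadow-ban store: a banned account's posts are
    silently diverted to a review inbox instead of being published,
    while the author still sees a normal success response."""

    def __init__(self):
        self.banned = set()
        self.inbox = defaultdict(list)  # account -> diverted posts
        self.appeals = []               # unban requests (false positives)

    def shadow_ban(self, account):
        self.banned.add(account)

    def submit_post(self, account, post):
        """Called on every new post. A shadow-banned author gets the
        same 'posted' response as everyone else, so a bot operator
        receives no signal that the ban happened."""
        if account in self.banned:
            self.inbox[account].append(post)  # quietly diverted
            return "posted"                   # fake success to the author
        return "published"

    def dump_inbox(self):
        """Moderators occasionally flush the diverted posts."""
        dumped = dict(self.inbox)
        self.inbox.clear()
        return dumped

    def request_unban(self, account, reason):
        """Escape hatch for accounts caught as false positives."""
        self.appeals.append((account, reason))
```

The key design point is that `submit_post` returns the same answer to banned and unbanned authors, which is exactly what hides the ban from the bot creator.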

[–] ptz@dubvee.org 61 points 3 days ago* (last edited 3 days ago) (11 children)

My instance has "Rule 3: No AI Slop. This is a platform for humans to interact" and it's enforced pretty vigorously.

As far as "how":

  1. Sometimes it's obvious. In those cases, the posts are removed and the account behind them investigated. If the account shows a pattern of it, they get a one-way ticket to Ban City.

  2. Sometimes they're not obvious, but the account owner will slip up and admit to it in another post. Found a handful that way, and you guessed it, straight to Ban City.

  3. Sometimes it's difficult on an individual post level unless there are telltale signs. Typically have to look for patterns in different posts by the same account and account for writing styles. This is more difficult / time-consuming, but I've caught a few this way (and let some slide that were likely AI-generated but not close enough to the threshold to ban).

  4. I hate the consumer AI crap (it has its place, but every consumer product is not one of them), but sometimes, if I'm desperate, I'll try to get one of them to generate a post similar to the one I'm evaluating. If it comes back very close, I'll assume the post I'm evaluating was AI-generated and remove it while looking at other content by that user, changing their account status to Nina Ban Horn if appropriate.

  5. If an account has a high frequency of posts that seems inorganic, the Eye of Sauron will be upon them.

  6. User reports are extremely helpful as well

  7. I've even banned accounts that post legit news articles but use AI to summarize the article in the post body; that violates Rule 3 (no AI slop) and Rule 6 (misinformation), since AI has no place near the news.
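The frequency check in item 5 amounts to a sliding-window rate limit. A minimal sketch, assuming made-up thresholds (real values would need tuning against normal human posting patterns):

```python
from datetime import datetime, timedelta

def looks_inorganic(post_times, window=timedelta(hours=1), max_posts=10):
    """Flag an account if more than `max_posts` posts fall inside any
    sliding `window`. Thresholds here are guesses for illustration."""
    times = sorted(post_times)
    start = 0
    for end, t in enumerate(times):
        # advance the left edge until the window spans at most `window`
        while t - times[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False
```

A bot posting every 30 seconds trips this check quickly; a human posting a few times an hour does not.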

If you haven't noticed, this process is quite tedious and absolutely cannot scale with a small team. My suggestion: if something seems AI-generated, do the legwork yourself (as described above) and report it; be as descriptive in the report as possible to save the mod/admin quite a bit of work.
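The "generate and compare" trick in item 4 boils down to a text-similarity check. A crude sketch using character n-gram Jaccard similarity; the 0.6 threshold is invented for illustration, and real-world use would need something far more robust:

```python
def ngrams(text, n=3):
    """Character n-grams of a whitespace-normalized, lowercased string."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of character n-gram sets, from 0.0 to 1.0."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def probably_same_source(suspect, generated, threshold=0.6):
    """If a model's output comes back 'very close' to the suspect post,
    treat that as one signal among several, never proof on its own."""
    return similarity(suspect, generated) >= threshold
```

This would only ever be one weak signal; as the comment notes, it gets combined with account-level patterns before any ban.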

[–] SorteKanin@feddit.dk 1 points 2 days ago (1 children)

Sometimes it’s difficult on an individual post level unless there are telltale signs. Typically have to look for patterns in different posts by the same account and account for writing styles.

The problem is that this is only going to get harder. First of all, AI is going to get better and will be able to produce more natural-sounding stuff.

But also, people will inevitably be affected by AI and will drift towards sounding more like AI too. So AI and humans will converge on each other, and they'll likely be impossible to tell apart in general within not too many years.

I'm not sure how we solve this tbh.

But also, people will inevitably be affected by AI and will drift towards sounding more like AI too.

The “AI checkers” that schools/unis use have found a strong correlation between neurodiversity and sounding like AI. Basically, AI sounds autistic, so autistic people get flagged as AI.
