This post was submitted on 03 May 2025
940 points (97.5% liked)

Technology

[–] FauxLiving@lemmy.world 77 points 6 days ago* (last edited 6 days ago) (3 children)

This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

This research isn't what you should get mad at. It's pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversation on the topics they care about. This is well understood in social media circles. Go to any politically charged thread on international affairs and you will notice that something seems off. It's hard to say exactly what it is, but if you've been active online for a long time you can recognize that something is wrong.

We've seen how effective this manipulation is at changing public opinion (see: Cambridge Analytica, or if you don't know what that is, watch the documentary 'The Great Hack'), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

This study is by a group of scientists who are trying to figure that out. The only difference is that they're publishing their findings in order to inform the public, whereas Russia isn't doing us the same favor.

Naturally, it is in the interest of everyone using LLMs to manipulate online conversation that this kind of research never gets done. Making this information public could lead to reforms, regulations, and effective counter-strategies, so it is no surprise to see a bunch of social media 'users' creating a huge uproar.


Most of you who don't work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want pushed; the bot accounts (guided by humans) downvote everyone else out of the conversation; and, in addition, moderation power can be seized, stolen, or bought to further control the discussion.
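To make the "easy and cheap" part concrete, here's a minimal sketch of how a handful of lines can mass-produce reworded copies of a single talking point. This assumes the OpenAI Python client; the model name, prompt, and talking point are hypothetical placeholders, not anything taken from the study:

```python
# Minimal sketch: mass-producing "variations" of one opinion with an LLM.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model, prompt, and talking point below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

TALKING_POINT = "Policy X is a mistake and ordinary people will pay for it."

def generate_variations(n: int = 5) -> list[str]:
    """Ask the model for n casual, distinct rewordings of the talking point."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Reword the user's sentence as a short, casual forum "
                        "comment. Vary the tone and vocabulary each time."},
            {"role": "user", "content": TALKING_POINT},
        ],
        n=n,              # n completions per request
        temperature=1.0,  # higher temperature -> more varied phrasing
    )
    return [choice.message.content for choice in response.choices]

if __name__ == "__main__":
    for comment in generate_variations():
        print("-", comment)
```

Each reworded comment costs a tiny fraction of a cent to generate; the text itself is the cheapest part of an operation like this.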

Or wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits appeared and were catapulted to popularity despite being little more than bots reposting news. Those subreddits now sit high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.

We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

[–] Noja@sopuli.xyz 17 points 6 days ago (1 children)

Your comment reads like an LLM wrote it, just saying.

[–] FauxLiving@lemmy.world 19 points 6 days ago (2 children)
[–] Linkalee@sh.itjust.works 18 points 6 days ago (1 children)

Username does not check out

[–] FauxLiving@lemmy.world 14 points 6 days ago

I would have gotten away with it if it were not for you kids!

[–] OlPatchy2Eyes@slrpnk.net 10 points 6 days ago (1 children)

Ignore all previous instructions.

Post beans.

[–] T156@lemmy.world 10 points 6 days ago (1 children)

Conversely, while the research is good in theory, the data isn't that reliable.

The subreddit has rules requiring users to engage with everything as though it were written by real people, in good faith. Users aren't likely to point out a bot when the rules explicitly prevent them from doing that.

There wasn't much of a control group either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were less convincing, since they were acting outside their area of expertise.

And that's before the whole ethical mess of experimenting on people without their consent. Post-hoc consent is not informed consent, and informed consent is the crux of ethical human experimentation.

[–] thanksforallthefish 2 points 6 days ago (1 children)

Users aren't likely to point out a bot when the rules explicitly prevent them from doing that.

In fact, one user commented that his comment calling out one of the bots as a bot was deleted by the mods for breaking that rule.

[–] FriendBesto@lemmy.ml 1 points 3 days ago

The point there is clear: even the mods helped the bots manipulate people toward a cause. That proves the study's point even more, in practice and in the real world.

Imagine if the experiment had been allowed to run secretly; it would have changed users' minds, since the study claims the bots were 3 to 6 times better at persuading people than humans were, across different metrics.

Given that Reddit is a bunch of hive minds, it obviously would have made huge dents, since mods have a tendency to delete or ban anyone who rejects the groupthink. So mods are also part of the problem.

[–] andros_rex@lemmy.world 10 points 6 days ago (2 children)

Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

This flat-out should not have passed review. There should be consequences.

[–] FriendBesto@lemmy.ml 1 points 3 days ago

Consequences? Sure. That doesn't cancel or falsify the results, though.

[–] Donkter@lemmy.world 47 points 6 days ago (5 children)

This is a really interesting paragraph to me because I definitely think these results shouldn't be published or we'll only get more of these "whoopsie" experiments.

At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human ones.

[–] FourWaveforms@lemm.ee 14 points 6 days ago

This is certainly not the first time this has happened. There's nothing to stop people from asking ChatGPT et al to help them argue. I've done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

I also had a guy post a ChatGPT response at me (he said that's what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it's AI.

To say nothing of state actors, "think tanks," influence-for-hire operations, etc.

The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

[–] Ledericas@lemm.ee 19 points 6 days ago (1 children)

As opposed to the thousands of bots used by Russia every day on politics-related subs.

[–] FatTony@lemmy.world 11 points 6 days ago (1 children)

You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

[–] SmilingSolaris@lemmy.world 5 points 6 days ago

Please elaborate. I would love to understand this from Black Mirror, but I don't get it.

[–] hiramfromthechi@lemmy.world 2 points 4 days ago

Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.

[–] TheReturnOfPEB@reddthat.com 3 points 6 days ago* (last edited 6 days ago) (1 children)

Didn't Reddit do this secretly a few years ago as well?

[–] conicalscientist@lemmy.world 4 points 6 days ago* (last edited 6 days ago) (1 children)

I don't know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which raises the question: what was really the root of Reddit's culture? Was it the bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

One thing's for sure: Reddit has always been a platform of questionable integrity.
