this post was submitted on 09 Jul 2023
18 points (95.0% liked)

Fediverse

17536 readers

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".


founded 4 years ago

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

all 32 comments
[–] PetrichorBias@lemmy.one 11 points 1 year ago* (last edited 1 year ago) (3 children)

This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

one main, one alt, one "professional" (linked publicly on my website), and five for my bots (accounts I created optimistically but never properly ran). I had all 8 accounts signed in on my third-party app and could easily manipulate votes on my own posts.

I feel like this is what happened when you'd see posts with hundreds or thousands of upvotes but only 20-ish comments.

There needs to be a better way to address this, but I'm unsure if it can truly be solved. Botnets are a problem across all social media (my undergrad thesis many years ago was detecting botnets on Reddit using Graph Neural Networks).

Fwiw, I have only one Lemmy account.

[–] impulse@lemmy.world 2 points 1 year ago

I see what you mean, but there's also a large number of lurkers, who will only vote but never comment.

I don't think it's implausible for a highly upvoted post to have only a small number of comments.

[–] AndrewZabar@beehaw.org 1 points 1 year ago

On Reddit there were literally bot armies that could instantly deliver thousands of votes. It will become a problem if votes have any actual effect.

It’s fine if they’re only there as an indicator, but if votes determine popularity or prioritize visibility, it will become a total shitshow at some point. And it will be rapid. So yeah, better to have a defense system in place asap.

[–] simple@lemmy.world 0 points 1 year ago (1 children)

Reddit had ways to automatically catch people trying to manipulate votes, though, at least the obvious ones. A friend of mine posted a reddit link for everyone in our group to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know if something like that can be implemented in the Fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted or distrusted a user or instance is.

[–] cynar@lemmy.world -1 points 1 year ago (1 children)

An automated trust rating will be critical for Lemmy in the longer term. It's the same arms race email has to fight. There should be a linked trust system of both instances and users: the instance 'vouches' for its users' trust scores, but if other instances collectively disagree, the trust score of the instance itself also takes a hit. Other instances can then use this information to judge how much to allow from users of that instance.
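That linked-trust idea can be sketched as a toy model (everything here is hypothetical: the data structures, the cap rule, and the 0.8/0.2 blend are illustrative choices, not anything Lemmy implements):

```python
from dataclasses import dataclass, field

@dataclass
class InstanceTrust:
    """Trust record a receiving server might keep for one remote instance."""
    score: float = 1.0                       # 1.0 = fully trusted, 0.0 = ignored
    user_scores: dict[str, float] = field(default_factory=dict)

def effective_user_trust(inst: InstanceTrust, user: str) -> float:
    # The home instance "vouches" for its users; a user's effective trust
    # is capped by the trust of the instance itself.
    return min(inst.user_scores.get(user, 1.0), inst.score)

def apply_peer_consensus(inst: InstanceTrust, peer_scores: list[float]) -> None:
    # If other instances collectively distrust this instance, pull its
    # score toward their consensus (a simple exponential moving average).
    if peer_scores:
        consensus = sum(peer_scores) / len(peer_scores)
        inst.score = 0.8 * inst.score + 0.2 * consensus
```

Note the cap: even a "vouched" user can never be more trusted than their home instance, which is what lets peer disagreement demote a whole bot farm at once.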

[–] fmstrat@lemmy.nowsci.com -1 points 1 year ago

This will be very difficult. With Lemmy being open source (which is good), bot makers can simply avoid the pitfalls they see in the system (which is bad).

[–] mintyfrog@lemmy.ml 2 points 1 year ago

PSA: internet votes are based on a biased sample of users of that site and bots

[–] 7heo@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (2 children)
[–] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]

[–] SQL_InjectMe@partizle.com 0 points 1 year ago (1 children)

Small instances are cheap, so we need a way to prevent 100 bot instances running on the same server from gaming this too

[–] 7heo@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)
[–] krnl386@lemmy.ca 1 points 1 year ago

Did anyone ever claim that the Fediverse is somehow a solution for the bot/fake vote or even brigading problem?

[–] Hannah789@lemmy.my.id 1 points 11 months ago* (last edited 11 months ago)

This blog post is fantastic! It's packed with valuable insights and actionable advice. Thanks for sharing such an informative and well-written article. buy Linkedin Connections

[–] Andreas@feddit.dk 1 points 1 year ago

Federated actions are never truly private, including votes. While it's inevitable that some people will abuse the vote viewing function to harass people who downvoted them, public votes are useful to identify bot swarms manipulating discussions.

[–] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]

[–] stevedidWHAT@lemmy.world 1 points 1 year ago

You mean to tell me that copying the exact same system that Reddit was using and couldn’t keep bots out of is still vuln to bots? Wild

Until we find a smarter way or at least a different way to rank/filter content, we’re going to be stuck in this same boat.

Who’s to say I don’t create a community of real people who are devoted to manipulating votes? What’s the difference?

The issue at hand is the post ranking system/karma itself. But we’re prolly gonna be focusing on infosec going forward given what just happened

[–] milicent_bystandr@lemmy.ml 0 points 1 year ago (1 children)

I wonder if it's possible ...and not overly undesirable... to have your instance essentially put an import tax on other instances' votes. On the one hand, it's a dangerous direction for a free and equal internet; but on the other, it's a way of allowing access to dubious communities/instances, without giving them the power to overwhelm your users' feeds. Essentially, the user gets the content of the fediverse, primarily curated by the community of their own instance.
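The "import tax" could be expressed as a simple weighting function (a sketch; the default 0.5 discount and the per-instance tariff table are made-up knobs, not an existing Lemmy feature):

```python
def weighted_score(votes: list[tuple[str, int]],
                   local_instance: str,
                   tariffs: dict[str, float]) -> float:
    """Sum +1/-1 votes, discounting those from other instances.

    votes: (voter's home instance, vote value) pairs.
    tariffs: per-instance discount factor; unlisted remote instances
    default to counting half.
    """
    total = 0.0
    for voter_instance, value in votes:
        if voter_instance == local_instance:
            total += value                                  # full weight for local votes
        else:
            total += value * tariffs.get(voter_instance, 0.5)
    return total
```

A dubious instance can then be tariffed down to 0.0 without being defederated outright, which matches the "access without overwhelming the feed" goal.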

[–] deadsuperhero@lemmy.ml 0 points 1 year ago (1 children)

Honestly, thank you for demonstrating a clear limitation of how things currently work. Lemmy (and Kbin) probably should look into internal rate limiting on posts to avoid this.

I'm a bit naive on the subject, but perhaps there's a way to detect "over x amount of votes from over x amount of users from this instance" and basically invalidate them?

[–] jochem@lemmy.ml 0 points 1 year ago (1 children)

How do you differentiate between a small instance where 10 votes would already be suspicious vs a large instance such as lemmy.world, where 10 would be normal?

I don't think instances publish how many users they have and it's not reliable anyway, since you can easily fudge those numbers.

[–] deadsuperhero@lemmy.ml -1 points 1 year ago (1 children)

10 votes within a minute of each other is probably normal. 10 votes all at once, or within microseconds of each other, is statistically far less likely.

I won't pretend to be an expert on the subject, but it seems like it's mathematically possible to set some kind of threshold? If a set percent of users from an instance are all interacting microseconds from each other on one post locally, that ought to trigger a flag.

Not all instances advertise their user counts accurately, but they're nevertheless reflected through a NodeInfo endpoint.
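The burst heuristic described above is easy to prototype (a sketch with made-up thresholds; `instance_user_count` would come from the user count an instance advertises via its NodeInfo endpoint, with the caveat already raised that such numbers can be fudged):

```python
def burst_ratio(timestamps: list[float], window: float = 0.001) -> float:
    """Fraction of consecutive vote pairs arriving within `window` seconds."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    close_pairs = sum(1 for a, b in zip(ts, ts[1:]) if b - a < window)
    return close_pairs / (len(ts) - 1)

def looks_coordinated(timestamps: list[float],
                      instance_user_count: int,
                      flag_fraction: float = 0.05) -> bool:
    # Flag when most votes land microseconds apart AND the burst involves
    # a meaningful share of the instance's reported user base.
    return (burst_ratio(timestamps) > 0.5
            and len(timestamps) >= flag_fraction * instance_user_count)
```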

[–] CybranM@feddit.nu 1 points 1 year ago

Surely the bot server can just set up a random delay between upvotes to circumvent that sort of detection

[–] menturi@lemmy.ml 0 points 1 year ago (1 children)

I wonder if an instance could only allow votes by users who are part of instances that require email verification or some other verification method. I would imagine that would heavily help reduce vote manipulation on that particular instance.
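A minimal sketch of that policy, assuming the admin keeps a hand-curated allowlist of instances known to require verification (the set below is purely illustrative):

```python
# Hypothetical allowlist of instances the admin trusts to verify signups.
VERIFIED_INSTANCES = {"lemmy.ml", "lemmy.world"}

def accept_vote(voter: str) -> bool:
    """Accept a vote only if the voter's home instance is on the allowlist.

    voter is in "user@instance" form.
    """
    _, _, instance = voter.partition("@")
    return instance in VERIFIED_INSTANCES
```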

[–] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]

[–] popemichael@lemmy.world 0 points 1 year ago (1 children)

You can buy 700 votes anonymously on reddit for really cheap

I don't see that it's a big deal, really. It's the same as it ever was.

[–] Valmond@lemmy.ml 1 points 1 year ago

Over a hundred dollars for 700 upvotes O_o

I wouldn't exactly call that cheap 🤑

On the other hand, ten or twenty quick downvotes on an early answer could swing things I guess ...

[–] Wander@yiffit.net 0 points 1 year ago (1 children)

In case anyone's wondering this is what we instance admins can see in the database. In this case it's an obvious example, but this can be used to detect patterns of vote manipulation.

[–] toish@yiffit.net 1 points 1 year ago

“Shill” is a rather on-the-nose choice for a name to iterate with haha

[–] thoralf@discuss.tchncs.de -1 points 1 year ago

People may not like it but a reputation system could solve this. Yes, it's not the ultimate weapon and can surely be abused itself.

But it could help to prevent something like this.

How could it work? Well, each server could retain a reputation score for each user it knows. Every up- or downvote is then modified by this value.

This will not solve the issue entirely, but it will make abuse harder.
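That reputation-weighted tally might look like this (a sketch; the 0.5 default reputation for unknown voters is an arbitrary choice for illustration):

```python
def tally(votes: list[tuple[str, int]], reputations: dict[str, float]) -> float:
    """Sum +1/-1 votes, scaling each by the voter's reputation score.

    Voters the server has no history for get a neutral default of 0.5,
    so a fresh horde of accounts carries half weight at best.
    """
    return sum(value * reputations.get(user, 0.5) for user, value in votes)
```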

[–] bloodfart@lemmy.ml -1 points 1 year ago (1 children)

Get rid of votes. They suck.

[–] Duke_Nukem_1990@feddit.de 0 points 1 year ago (1 children)

Nah, I want to downvote Nazis. Their opinions don't matter and should be suppressed.

[–] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]