this post was submitted on 24 Jul 2023
816 points (85.4% liked)
Showerthoughts
you are viewing a single comment's thread
Can someone explain to me why everyone on this site thinks that everything bad about other social media sites is somehow being forced upon the users to enslave them to "the algorithm"? It's like socialist Qanon.
I don't know exactly what angle you're looking to clarify in that regard, but to ELI5 it:
There are two factors: targeted ads and algorithm manipulation.
Mainstream social media sites earn money from the ads they deliver. The longer people stay on the site viewing posts, the more ads they see. The algorithm is designed to promote content that users are likelier to view, not necessarily content that they would like more. In practice, this tends to be content with some sort of shock value. That combination of targeted ads and clickbait is what creates "doomscrolling".
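To make that concrete, here's a toy sketch of an engagement-maximizing ranker. Every name and number is invented for illustration; real systems use learned models over far more signals. The key point is what the objective function rewards: predicted time-on-site, not predicted enjoyment.

```python
from dataclasses import dataclass

# Hypothetical sketch; field names and values are made up.
@dataclass
class Post:
    title: str
    predicted_watch_time: float  # seconds the model expects you to spend
    predicted_like_prob: float   # chance you'd actually enjoy it

def engagement_score(post: Post) -> float:
    # Note the objective: expected time-on-site. predicted_like_prob is
    # deliberately ignored, because "would they like it" isn't the goal.
    return post.predicted_watch_time

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Cute dog", predicted_watch_time=5, predicted_like_prob=0.9),
    Post("Outrage bait", predicted_watch_time=90, predicted_like_prob=0.2),
])
print([p.title for p in feed])  # outrage bait ranks first, despite lower like probability
```

The shocking post you'd stare at for 90 seconds outranks the pleasant one you'd like but scroll past in 5.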
Longer explanation below:
The value that social media sites give to advertisers is that they know everything about their users. They collect data based on posts and viewing habits to learn things like income, hobbies, location, sexual orientation, political affiliation, etc. When advertisers buy ads to show on social media sites, they get to target those ads at the specific people on whom they are likely to make the biggest impact.
But what happens if you want to increase the visibility of your (not ad) content on social media? A lot of companies use social media to bring people to their own sites/channels where they make money. In some cases, they can pay to be promoted, giving them an advantage in the algorithm. In other cases, they can manipulate the algorithm using clickbait (to engage users using the doomscrolling trend) or even using bots to give a false sense of engagement.
In recent major elections/referendums, there were a lot of ads and promoted content intended to sway opinions. People would intentionally be shown content to upset them, increasing doomscrolling and increasing their chances of getting out to vote against these things. However, in many cases, the content that people would see would be half-truths or outright lies. Because they were earning money, social media sites did not care about verifying the content of the ads they were showing.
There is strong evidence that the Brexit vote, for example, was swayed by targeted ads and clickbait, delivered via social media, that led voters to believe falsehoods. And in many cases, these lies weren't just spread by specific political campaigns, but by external state actors with a vested interest in the outcome. Namely Russia, which had a lot to gain from a weaker EU.
Lemmy is not immune to doomscrolling and bot manipulation, but it doesn't have ads and, as far as we know, does not sell user data. It's harder to be targeted here because the only thing people can do is try to game the vote system to make their content more visible (which is sadly easier than it should be). But all you have access to are people subscribed to specific communities or registered on specific instances. It's harder to target people en masse, and you only have a single data point to target, namely people who like [community topic].
Sooooo, there’s a lot of truth to it.
Once a site is big enough that its owners want to cash in on it, they develop tools and AI and make choices that are designed to keep you on the site longer.
These tools and AI quickly discover that the way to keep you engaged is to keep you enraged. Content that angers you will hold your attention longer and keep you coming back.
This is well researched and I’ll cite sources if you need it.
So what happens is that the AI, while it isn't designed explicitly to show right-wing content, will end up learning that showing that content accomplishes its actual goal: "keep people on the site longer".
Right-wing content fits a nice niche where it engages a lot of people. Donald Trump claiming that he lost the election enrages the right because they believe his horseshit that the election was stolen, and it enrages the left because it causes unnecessary violence like Jan 6th. The AI loves that because it's fairly universally enraging, and engages most people.
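You can see this emerge in a toy feedback loop; this isn't any real platform's code, and the engagement numbers are invented. The "AI" here is just a weighted sampler that was never told to prefer enraging content, only to maximize attention, and it learns the correlation on its own:

```python
import random

random.seed(0)

# Assumed average seconds of attention each category earns (invented numbers).
true_engagement = {"neutral_news": 10, "cat_videos": 15, "rage_bait": 45}
shown_weight = {cat: 1.0 for cat in true_engagement}

def pick_category() -> str:
    cats = list(shown_weight)
    return random.choices(cats, weights=[shown_weight[c] for c in cats])[0]

for _ in range(1000):
    cat = pick_category()
    engagement = true_engagement[cat] + random.uniform(-5, 5)
    # Reinforce whatever held the user longest. "Rage" never appears in the
    # objective, yet rage_bait's weight climbs fastest.
    shown_weight[cat] += engagement / 100

print(max(shown_weight, key=shown_weight.get))  # rage_bait
```

Nobody programmed a preference for rage; it falls out of optimizing attention.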
To build upon this, just getting into a petty online argument about nothing keeps users coming back. I enjoy reading the back and forth between two strangers.
Yep!
I don't understand why you got downvoted. Openly discussing exactly how you're going to trespass on government property and hog tie prominent politicians would raise a lot of eyebrows quickly. The multiple coordinated attacks, stashes of firearms and ammunition, the bombs that were placed, and the scheduled armed reinforcements that we now know about through the various court cases all happened in protected echo chambers and private chats, not on the open web
There is no truth to it. The vast majority of negative interactions and aberrations on a social media site are brought about by the users, not by the operators of the site. The tools they have are not as powerful as you think they are. The only reason they have any power at all is because the users give them that power, because that is what they want. You don't build a successful site by manipulating the user base to do what you want them to do; they would just leave. You simply give them what they want, and they never leave. "The algorithm" is there to give the user what they want, and it's actually really bad at doing that.
The users create the content, the background ai decides which content to prioritize and promote to the front page, etc..
Which part of that is wrong?
The fact that the user is the one inputting the data that determines the content they receive. You're selecting the content you interact with; it's not a black box trying to take over the population. They just want you to stay on the site, look at the ads, and never leave. They don't care about your political allegiance or what movies you like; they will feed you whatever you want.
Agreed!!!
The user selects the content that they interact with, but because content that upsets you is so engaging, the AI will heartily promote it.
Look at how engaged you are with these comments! Is it because they make you upset?
How interesting. ;-)
I only really ever comment when I have something to say. This usually is only when I disagree with something.
That’s why my upvote ratio is terrible. I rarely comment when I agree with something someone has said. I bet my ratio would be a lot better if I did.
But that’s just human nature, I think. Some people crave acceptance and validation so they comment agreement and some people crave conflict and challenge, so they comment in disagreement.
Everyone is the hero of their own story, so I think they feel the need to “correct” perceived injustices.
I think your experience is common.
And I think AI exploits this, because it’s useful.
I agree 100%
You're attributing combative interaction to an algorithm on a site that has no algorithm. Congratulations, you just proved the algorithm isn't needed to cause interaction. People do this with no computer forcing them to, yet tons of people here are convinced that every other site is filled with bots manipulating content, when the people are asking for that content, sometimes very directly.
I’m attributing combative interactions to “keeping your attention”.
The AI just exploits this.
So while it’s not NEEDED, it does happen and it works.
Maybe your point is better worded as “the AI doesn’t overrule your own ability to choose”.
Which, while true, doesn’t change my point: combative interactions happen without AI; the AI just learns and promotes them.
So you understand the system very well, yet completely ignore the ethically dubious aspects of the system.
People are not born desiring harmful garbage. They are, at least in part, taught, conditioned to desire it.
When you say that a site "feeds you whatever you want", you're ignoring the chicken-or-the-egg pattern of desire and satisfaction on the market. The site teaches you what you want. Internet addiction and the ways in which contemporary media and tech affect your mind (most obviously by reducing people's attention spans) are fairly well known today.
Imagine a drug dealer who sells his garbage to the same person so much that they develop an addiction. With your logic, we can just blame the junkie who keeps returning to the dealer, while the dealer is pretty much innocent - surely it's not his responsibility if someone else develops an addiction and destroys their life!
Purely anecdotal, but I have two Facebook profiles. I'm extremely left leaning, especially in the fake one, yet both have their feeds blowing up with articles from conservative pages and groups about this "small town" song, Donald Trump, and Ron DeSantis. Oh, and Fox News articles too, up until I hid them.
I don't engage with any of those communities or anything even tangentially related to them. I have discussed all of those concepts in groups lately, though.
Is manipulation a force?
No one forces you to engage in arguments on Reddit or Twitter. You have autonomy over who you interact with on both sites. You're not being forced or manipulated to do anything. If you engage in these things people perceive as negative, it's because you chose to do it of your own free will.
No, it's because of your scrolling speed, pauses, engagements, updoots, downdoots. Everything you do is taken into consideration to update your feed with more stuff that you are likely to engage with. That's all.
And you're the one doing it all, not a computer. The computer is not that smart; you need to tell it what you want to see.
You're arguing that there's no algorithm that promotes content users interact with on Facebook, Instagram, Reddit, and so on? This has been proven repeatedly.
On Facebook, most people seem to think they get their feed in time order, and that hasn't been true for a decade or more. For example, posts with pictures get promoted closer to the top of the feed. Crucially, posts with more interaction (replies, reactions, probably even reports) are also shown closer to the top, so the user is much more likely to see those when they load Facebook.
So, things that upset people get a lot of interaction, and those get shown at the top of feeds, which generates even more interaction. Then the algorithms start looking for similar content that will generate the same type of interactions to put near the top of the feed, and the next thing you know, millions of people are worshiping the ground some idiotic politician walks on.
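The ordering effect described above can be sketched in a few lines. The field names and weights here are made up, not Facebook's actual formula; the point is only that once interaction counts dominate the score, chronology loses:

```python
# Toy feed: an old post with lots of reactions vs. fresh posts with few.
posts = [
    {"title": "Vacation photos", "age_hours": 1, "reactions": 3, "has_image": True},
    {"title": "Inflammatory rant", "age_hours": 8, "reactions": 250, "has_image": False},
    {"title": "Lunch update", "age_hours": 2, "reactions": 1, "has_image": False},
]

def feed_rank(post: dict) -> float:
    score = post["reactions"] * 2.0           # interaction dominates the score
    score += 10.0 if post["has_image"] else 0  # pictures get a boost
    score -= post["age_hours"] * 1.0           # recency matters, but weakly
    return score

for post in sorted(posts, key=feed_rank, reverse=True):
    print(post["title"])
# The 8-hour-old rant with 250 reactions beats the fresh vacation photos.
```

And because being shown at the top earns the rant still more reactions, its score keeps climbing: the feedback loop the comment describes.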
Sure, it's free will to respond, but the fact is that the users' feed is being curated, focused on whatever extreme thing generates reactions.