this post was submitted on 29 Jun 2025
85 points (89.7% liked)

Privacy

Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
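One way to sketch the idea: generate decoy "personas" with random fake interests and a shuffled visit plan, which a browser automation layer could then execute. Everything below is illustrative — the topics and site names are placeholders, not a real implementation.

```python
import random

# Hypothetical sketch: build decoy browsing plans so that fake interests
# drown out a real profile. Topics and domains are invented placeholders.
TOPICS = {
    "gardening": ["seedsite.example", "plantforum.example"],
    "motorsport": ["f1news.example", "trackdays.example"],
    "knitting": ["yarnshop.example", "patternhub.example"],
    "astronomy": ["skymaps.example", "telescopereviews.example"],
}

def make_decoy_persona(rng, n_interests=2, visits_per_interest=5):
    """Pick random fake interests and build a list of decoy page visits."""
    interests = rng.sample(sorted(TOPICS), n_interests)
    plan = []
    for topic in interests:
        for _ in range(visits_per_interest):
            plan.append((topic, rng.choice(TOPICS[topic])))
    rng.shuffle(plan)  # interleave topics so the pattern looks less scripted
    return interests, plan

rng = random.Random(42)
interests, plan = make_decoy_persona(rng)
print(interests, len(plan))
```

The hard part isn't generating the plan, it's making the execution statistically indistinguishable from human behavior — which is exactly what the discussion below turns on.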

[–] Ulrich@feddit.org 2 points 1 day ago (1 children)

filtering out random false data seems trivial

As far as I know, none of them had random false data so I'm not sure why you would think that?

By comparison, it seems extremely complicated to algorithmically figure out exactly which customized lie you have to tell each individual to manipulate them into behaving a certain way. That probably took a large team of smart people working together for many years.

I feel like you're greatly exaggerating the level of intelligence at work here. It's not hard to figure out people's political affiliations with something as simple as their browsing history, and it's not hard to manipulate them with propaganda accordingly. They did not have an "exact customized lie" for every individual, they just grouped individuals into categories (AKA profiling) and showed them a select few forms of disinformation accordingly.
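The kind of coarse, category-based profiling described here can be sketched in a few lines — bucket a user by keyword matches in their browsing history and show each bucket its own disinformation. The categories and keywords below are invented for illustration, not taken from any real system.

```python
# Minimal sketch of profiling by browsing history: count keyword hits
# per category and assign the user to the best-matching bucket.
CATEGORY_KEYWORDS = {
    "left-leaning": ["union", "climate", "progressive"],
    "right-leaning": ["tariff", "border", "traditional"],
}

def profile(history):
    """Return the category whose keywords appear most often in the history."""
    scores = {cat: 0 for cat in CATEGORY_KEYWORDS}
    for url in history:
        for cat, words in CATEGORY_KEYWORDS.items():
            scores[cat] += sum(w in url.lower() for w in words)
    return max(scores, key=scores.get)

print(profile(["news.example/climate-report", "blog.example/union-vote"]))
# → left-leaning (two keyword hits vs. zero)
```

Crude as this is, it illustrates why no "exact customized lie" per person is needed: a handful of buckets with a few messages each covers everyone.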

[–] HelloRoot@lemy.lol 1 points 1 day ago* (last edited 1 day ago) (1 children)

Good input, thank you.


As far as I know, none of them had random false data so I’m not sure why you would think that?

You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example (during a chess game analysis): "Moving the knight in front of the bishop is like a punch in the face from Mike Tyson."


There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up is filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?
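As a sketch of what such a garbage filter might look at: scripted noise that clicks everything tends toward a uniform spread over interest categories, while real users cluster on a few. One simple signal is the Shannon entropy of a session's click categories. The threshold and category counts here are assumptions for illustration.

```python
from collections import Counter
from math import log2

def category_entropy(clicks):
    """Shannon entropy (in bits) of the click-category distribution."""
    counts = Counter(clicks)
    total = len(clicks)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical heuristic: flag sessions whose entropy sits near the
# maximum possible for the category space (i.e. near-uniform clicking).
def looks_like_noise(clicks, n_categories=8, margin=0.5):
    return category_entropy(clicks) > log2(n_categories) - margin

human = ["sports"] * 9 + ["tech"] * 3
bot = ["sports", "tech", "food", "cars", "news", "pets", "travel", "music"] * 2
print(looks_like_noise(human), looks_like_noise(bot))
# → False True
```

Of course, a noise generator could respond by sampling from a skewed, human-like distribution — which is the arms race the question is really about.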

[–] Ulrich@feddit.org 0 points 1 day ago

You can use topic B as an illustration for topic A

Sometimes yes. In this case, no.

Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?

I think the number of users of such products is so low (especially since it was removed from the Chrome Web Store) that it wouldn't be worth their time.

But no, I don't think they could either. It's just an automation script that runs actions the same way you would.
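The "automation script" idea can be sketched as a planner that emits the same visit/dwell/click actions a person would, with randomized timing. The executor is stubbed out here; a real version would drive a browser through an automation library. All names and timing ranges are illustrative assumptions.

```python
import random

# Sketch: plan a human-looking session as a list of (action, url, seconds)
# tuples. Nothing here touches the network; execution is left abstract.
def plan_session(rng, urls, click_prob=0.3):
    actions = []
    for url in urls:
        actions.append(("visit", url, 0.0))
        actions.append(("dwell", url, rng.uniform(5, 40)))  # time on page
        if rng.random() < click_prob:
            actions.append(("click_ad", url, rng.uniform(1, 3)))
    return actions

rng = random.Random(7)
session = plan_session(rng, ["a.example", "b.example", "c.example"])
print(len(session))
```

Which is the point being made: from the site's perspective, these are ordinary page loads and clicks, so there is little to fingerprint beyond behavioral statistics.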