this post was submitted on 24 Mar 2025
36 points (86.0% liked)

Technology

9 comments
[–] SnotFlickerman@lemmy.blahaj.zone 19 points 6 days ago* (last edited 6 days ago)

I mean, the article is pretty long, but it's pretty simple:

  • Don't use AI of any sort to be a source for an answer to your question.

  • Do use Wikipedia and check the sources referenced.

  • If not on Wikipedia, check a trusted source with a relatively long publishing history and known ownership. (this doesn't mean only stuff like The New York Times... Boing Boing for example has been around for a long, long time)

  • Use archive.org's Wayback Machine to get access to older articles, when necessary.
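The last step in the list above can be scripted: the Wayback Machine serves snapshots at predictable URLs of the form `https://web.archive.org/web/<timestamp>/<original-url>`. A minimal sketch (the helper name and example site are illustrative, not from the article):

```python
def wayback_url(url: str, timestamp: str = "") -> str:
    """Return a Wayback Machine URL for `url`.

    `timestamp` is YYYYMMDDhhmmss or any prefix of it (e.g. "2015");
    the Wayback Machine snaps to the closest capture. With no
    timestamp, it redirects to the most recent snapshot.
    """
    prefix = f"{timestamp}/" if timestamp else ""
    return f"https://web.archive.org/web/{prefix}{url}"

print(wayback_url("https://boingboing.net", "20150101"))
# https://web.archive.org/web/20150101/https://boingboing.net
```

Opening that URL in a browser takes you to the archived copy, which is handy when an older article has been edited or taken down.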

LLMs literally have no frame of reference to real life except the text they've ingested, and they have no way to know which text is true. To an LLM, Alex Jones is just as valid a source as Mother Jones.

[–] supersquirrel@sopuli.xyz 4 points 6 days ago* (last edited 6 days ago) (1 children)

I really have very little tolerance for people on the continuum from techbro to naive libertarian who invent hypothetical technical solutions to this that all either boil down to systems of centralized control, or to wildly unrealistic systems that will never get off the ground, like fantastical depictions of flying machines...

Let's get straight to the point: the hard problem here is that, philosophically, there really is no shortcut to tell AI slop from genuine, real information. There can fundamentally be no logical operation that separates the "real" from "bot spam," because you fail at the first step of defining "real", especially if you are a techbro or libertarian fool who has never thought through the implications of any of this (see the shitshow that is the social media hellhole Gab).

I think a lot of people I am tempted to refer to as "centrist" (though that is a problematic generalization, and it is of course more complicated) want to believe we just need more authoritarianism and more advanced technology to solve this problem, and that is ultimately a dangerous fantasy.

At a philosophical level, which, let me remind everyone, is the level you need to talk at before you ever bother thinking about technical implementations and advanced AI fact checkers and so on, the only thing we can really do is design spaces that make it likely for the human parts of real information to shine through, in a way that makes it apparent the information was unlikely to have been generated by a bot or a nefarious actor.

This is a game of probabilities, like trying to guess someone's intentions or understand what they are feeling: we might get very, very good at it, but there is always a significant likelihood that we are wrong, whether from a lack of context or just because that is how it goes with unpredictable, chaotic things...

So then how do we design spaces so that they let the authenticity of "real" things shine through? I would argue the answer is genuine, spontaneous conversation and interaction in public or semi-public shared spaces. Forums, lemmy/reddit-likes and other forms of public discussion create conversations and as human beings we are INCREDIBLY good at observing interactions between strangers and deducing if those interactions feel genuine or not.

We can often be wrong about it, but anybody who has done theater for any amount of time, or really any kind of art for an audience, knows that though the audience may not be able to put into words why something feels inauthentic, they notice the moment it does. That is why art performed for an audience is so endlessly compelling and why you can spend a lifetime learning from it.

What we can hope to do, though we will always fail at some level because we can never be ideal, is to help build the "real" collectively through public conversation, disagreement, explanation, and the sharing of sources of information.

[–] SnotFlickerman@lemmy.blahaj.zone 3 points 6 days ago* (last edited 6 days ago) (1 children)

knows that though the audience may not be able to put into words why something feels inauthentic, they notice the moment it does.

This is also why forum disruptors use techniques like forum sliding, consensus cracking, and topic dilution to distract from the fact that their positions are disingenuous.

https://en.wikipedia.org/wiki/Joint_Threat_Research_Intelligence_Group

LLMs are not good at these techniques, though; it still takes a human touch to make them work.

I think the biggest difference is that LLM tech has streamlined persona management software and made it more effective.

[–] supersquirrel@sopuli.xyz 3 points 6 days ago* (last edited 6 days ago) (1 children)

Of course, this is also something artists who create and perform art for an audience understand: as much as expression can be about vulnerability, it can also be about lying, twisting the truth, and getting people to agree to terrible things because the way you deliver it is charismatic or distracts sufficiently from material reality.

All of this to say: yes, manipulation is still rampant, but we are talking about a semi-abstracted space. Manipulation will always be possible; it is simply a question of how obvious it is to authentic users and/or how costly it is to create sufficiently convincing astroturfing bots and manufactured consensus.

Your response doesn't invalidate my point (I don't mean to assume it does or to take a negative tone). My point is that public conversation, where people explain their arguments, give sources, and can respond to and critique each other, with no centralized authority determining users' existential ability to participate (i.e. getting banned from reddit for mentioning Luigi), is the best solution we have. This is the fundamentally difficult problem of social media; all the programming, the scaling up of server architecture, the design of protocols and UIs, the coding of clients and apps etc.... that is the easy stuff, honestly.

[–] SnotFlickerman@lemmy.blahaj.zone 3 points 6 days ago* (last edited 6 days ago) (1 children)

I was actually agreeing with your point, just to be clear.

it can also be about lying, twisting the truth and getting people to agree to terrible things because the way you deliver it is charismatic or distracts sufficiently from the material reality.

This Norm Macdonald bit is a great example of this type of crowd work.

[–] supersquirrel@sopuli.xyz 3 points 6 days ago* (last edited 6 days ago) (1 children)

Yeah, sorry, I am tired. After re-reading your response and mine, I realized I misread you and should have taken more time to think my response over. I apologize.

Thank you for the thoughtful response, I am sorry I didn't give it the space in my mind it deserved!

edit: would it honor you if I flicked some snot at you in solidarity?

No problem at all, it happens to us all sometimes!

[–] benelbow@lemm.ee 4 points 6 days ago

Well worth a read. Interesting, pithy & informative. Thanks.

[–] Valmond@lemmy.world 2 points 6 days ago