this post was submitted on 29 Jun 2025
388 points (98.5% liked)

Not The Onion

top 40 comments
[–] DarrinBrunner@lemmy.world 19 points 1 day ago

They're inserting themselves between us and our contacts. They'll have the power to "summarize", which is also the power to subtly re-interpret meaning. The AI will be given a broad goal, and it will chip away at that goal bit by bit, across millions of summaries, to mold public opinion.

[–] slaacaa@lemmy.world 50 points 1 day ago
[–] IAmNorRealTakeYourMeds@lemmy.world 24 points 1 day ago (1 children)
[–] slaacaa@lemmy.world 15 points 1 day ago (1 children)

😭♥️♥️

[–] Barometer3689@feddit.nl 8 points 1 day ago (1 children)

To be fair, my father tends to make messages quite incomprehensible by adding irrelevant information all over the place, sometimes going on for multiple screens when it could easily have been a 2-3 sentence message.

Sadly, I think AI would be even worse at picking out what information is important from that. But I understand why people want it.

As for very active group chats, I am not gonna read 25+ messages a day, but being able to get the gist of them at a glance would be awesome.

[–] DarrinBrunner@lemmy.world 8 points 1 day ago (1 children)

Which is exactly the point where the gist can be manipulated, leaving out context and nudging you toward a different opinion than you might have formed if you'd read the whole thread.

[–] BananaIsABerry@lemmy.zip -1 points 18 hours ago (1 children)

Friend, I think you need to reconsider your world perspective a bit. Not everyone is out to get you all the time.

[–] Barometer3689@feddit.nl 1 point 15 minutes ago

To be fair, when Facebook was still big, the privacy advocates were branded as paranoid. They turned out to be right after all.

[–] BigMacHole@sopuli.xyz 67 points 2 days ago (1 children)

CEOs are SO INTELLIGENT! I would NEVER have Thought to invest BILLIONS OF DOLLARS in Chatbots and Summarizers which ALREADY existed!

[–] SnarkoPolo@lemmy.world 23 points 2 days ago

"Caden, it looks like Airlynn just said you're a hopeless loser, and she's been banging your personal trainer Chad. Is there anything else I can help you with?"

[–] Alexander@sub.community 7 points 2 days ago

Tinder needs this function right now!

[–] dream_weasel@sh.itjust.works -2 points 1 day ago

I have one group chat specifically that is mostly bursts of 2 or 3 hours of chat between whoever is online, with some worthwhile coordination messages mixed in at random. I don't want to read 80 messages about mortgage rates and VTI stocks to find the couple of lines I'm actually interested in, about kid plans for the evening or something else I'd actually care to talk about.

[–] RaivoKulli@sopuli.xyz -3 points 1 day ago

It does make sense in big groups with tons of irrelevant discussion but also a few messages you actually need to read.

[–] Affidavit@lemmy.world 0 points 2 days ago (2 children)

I don't use WhatsApp, but this immediately made me think of my dad, who doesn't use any punctuation and frequently skips and misspells words. His messages are often very difficult to interpret, through no fault of his own (dyslexia).

Having an LLM do this for me would help both him and me.

He won't feel self-conscious when I send a "What you talkin' about, Willis?" message, and I won't have to waste a ridiculous amount of time trying to figure out what he was trying to say.

[–] ayyy@sh.itjust.works 39 points 2 days ago (1 children)

If he’s not communicating in an explicit and clear way, the AI can’t magically help you gain context. It will happily make up bullshit that sounds plausible, though.

[–] Feyd@programming.dev 19 points 2 days ago

What makes you think the LLM will be able to decipher something that already doesn't make sense?