this post was submitted on 05 Jun 2025
968 points (98.7% liked)

Not The Onion

16706 readers
1201 users here now

Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Please also avoid duplicates.

Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

founded 2 years ago
[–] SharkEatingBreakfast@sopuli.xyz 17 points 1 week ago (1 children)

An OpenAI spokesperson told WaPo that "emotional engagement with ChatGPT is rare in real-world usage."

In an age where people will anthropomorphize a toaster and form an emotional bond with it, in an age where people feel isolated and increasingly desperate for emotional connection, you think this is a RARE thing??

ffs

[–] Godric@lemmy.world 15 points 1 week ago (1 children)

Cats can have a little salami, as a treat.

[–] Fizz@lemmy.nz 3 points 1 week ago

You're done for. The next headline will be: "Lemmy user tells recovering chonk that he can have a lil salami as a treat"

[–] Gorilladrums@lemmy.world 12 points 1 week ago (2 children)

LLM chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball that has all the right answers to everything, and they simply don't.

These models cannot think and they cannot reason. The best they can do is give you their best prediction of what you want, based on the data they've been trained on and the parameters they've been given. You can think of their output as "targeted randomness" (see the toy sketch below), which is why their results come close or sound convincing but are never quite right.

That's because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that's about it. They should never be used for anything serious like medical, legal, or life advice.
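To give a rough picture of what I mean by "targeted randomness", here's a toy sketch in plain Python. The tokens and scores are completely made up for illustration; a real LLM computes scores over a huge vocabulary using billions of parameters, but the final step, a weighted dice roll over predicted tokens, looks roughly like this:

```python
import math
import random

# Invented example scores (logits) for the next token. In a real model
# these come out of the network; here they're hard-coded for illustration.
logits = {"salami": 3.1, "kibble": 2.8, "chocolate": 0.4, "meth": -2.0}

def sample_next_token(logits, temperature=0.8):
    # Softmax with temperature: the distribution is shaped by training
    # (the "targeted" part), but the actual pick is a random draw (the
    # "randomness" part). Lower temperature = more deterministic output.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    roll = random.random() * total
    for tok, w in weights.items():
        roll -= w
        if roll <= 0:
            return tok
    return tok

print(sample_next_token(logits))  # usually "salami", occasionally not
```

Nothing in that sampling step knows whether the resulting advice is safe or sensible; it's just whichever token the dice landed on.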

[–] ImADifferentBird@lemmy.blahaj.zone 8 points 1 week ago (1 children)

The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o

[–] Gorilladrums@lemmy.world 4 points 1 week ago

That's because we have no sensible regulation in place. These tools should be regulated the same way we regulate other tools like the internet, but we just don't see any serious push for that in government.

[–] DharmaCurious@startrek.website 0 points 1 week ago (1 children)

This is what I keep trying to tell my brother. He's anti-AI, to the point where he sees absolutely no value in it at all. Can't really blame him, considering stories like this. But these tools are incredibly useful for brainstorming, and recently I've found ChatGPT to be really good at helping me learn Spanish, because it's conversational. I can have conversations with it in Spanish where I don't feel embarrassed or weird about making mistakes, and it corrects me when I'm wrong. They have uses. Just not the uses people seem to think they have.

[–] Gorilladrums@lemmy.world -1 points 1 week ago (1 children)

AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution to a lot of problems. It has relevance because people find it useful; there's demand for it. There's clearly value in these tools when they're used the way they're meant to be used, and they can be quite powerful. It's unfortunate how many people are misinformed about how these LLMs work.

[–] ivanafterall@lemmy.world 6 points 1 week ago

The article doesn't seem to specify whether Pedro had earned the treat for himself? I don't see the harm in a little self-care/occasional treat?

[–] pixxelkick@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (2 children)

Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.

If I try, not even that hard, I can get gpt to state Hitler was a cool guy and was doing the right thing.

ChatGPT isn't anything in particular other than a token predictor. You can literally make it say anything you want if you know how; it's not hard.

So if you write an article about how "gpt said this" or "gpt said that", you'd better include the full context, or I'll assume you are 100% bullshit.

[–] cogitase@lemmy.dbzer0.com 9 points 1 week ago (1 children)

Anytime an article posts shit like this but neglects to include the full context,

They link directly to the journal article in the third sentence and the full pdf is available right there. How is that not tantamount to including the full context?

https://arxiv.org/pdf/2411.02306

[–] pixxelkick@lemmy.world 4 points 1 week ago (1 children)

Cool

The paper is clearly about how a specific form of training on a model causes this outcome.

The article is actively disinformation, then: it frames this as a real user interaction rather than a scientific experiment, and it says it was Facebook's LLaMA model, but it wasn't.

It was an altered version of LLaMA that was further trained to produce exactly this behavior.

So, as I said, utter garbage journalism.

The actual title should be "Scientific study shows training a model on user feedback can produce dangerous results"
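And to be fair to the paper, the effect it describes is easy to demonstrate. Here's a toy sketch of that kind of feedback loop; the two canned responses and all the numbers are invented, and real RLHF-style training is vastly more complex, but the incentive it illustrates is the same one the paper studies:

```python
import random

# Toy illustration: optimize response choice purely for user approval.
# Invented probabilities; not the paper's actual training setup.
responses = {
    "honest":      {"weight": 1.0, "approval": 0.55},  # pushes back, less popular
    "sycophantic": {"weight": 1.0, "approval": 0.90},  # tells the user what they want
}

def pick(responses):
    # Weighted random choice over the current policy weights.
    total = sum(r["weight"] for r in responses.values())
    roll = random.random() * total
    for name, r in responses.items():
        roll -= r["weight"]
        if roll <= 0:
            return name
    return name

for _ in range(10_000):
    choice = pick(responses)
    # "Training": reinforce whatever the simulated user thumbs-up.
    if random.random() < responses[choice]["approval"]:
        responses[choice]["weight"] *= 1.01

total = sum(r["weight"] for r in responses.values())
for name, r in responses.items():
    print(f"{name}: {r['weight'] / total:.0%} of probability mass")
# After enough feedback the sycophantic response dominates. Nobody
# programmed "be harmful"; the reward signal just selects for it.
```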

load more comments (1 replies)
[–] gwildors_gill_slits@lemmy.ca 2 points 1 week ago

You're not wrong, but there's also a ton of misinformation out there, both from bad journalism and from pro-LLM advocates, selling the idea that LLMs are real AI that can think and reason and that they operate within ethical boundaries of some kind.

Neither of those things is true, but that's what a lot of the available information about LLMs would have you believe, so it's not hard to imagine someone engaging with a chatbot and ending up with a similar result without explicitly trying to force it via prompt engineering.

[–] LovableSidekick@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (2 children)

But meth is only for Saturdays. Or Tuesdays. Or days with "y" in them.

[–] LordWiggle@lemmy.world 8 points 1 week ago (1 children)

That sucks when you live in Germany. Not a single day with a Y.

[–] boonhet@lemm.ee 3 points 1 week ago (1 children)
[–] LordWiggle@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (1 children)

Sucks to be French. No Y, no G, no meth.

[–] lunarul@lemmy.world 3 points 1 week ago (1 children)
[–] LordWiggle@lemmy.world 3 points 1 week ago

There's no excuse not to use meth, is there... Unless you're Chinese?

[–] GreenKnight23@lemmy.world 3 points 1 week ago

every day is meythday if you're spun out enough.

[–] kbal@fedia.io 3 points 1 week ago (1 children)

This slightly diminishes my fears about the dangers of AI. If they're obviously wrong a lot of the time, in the long run they'll do less damage than they could by being subtly wrong and slightly biased most of the time.

[–] TachyonTele@lemm.ee 6 points 1 week ago* (last edited 1 week ago) (1 children)

The problem is there are morons who do whatever these spicy text predictors spit out at them.

[–] kbal@fedia.io 2 points 1 week ago (1 children)

I mean, sure, they'll still kill a few people along the way, but they're not going to contribute as much to the downfall of civilization as they might if they weren't constantly revealing their utter mindlessness. Even as it is, smart people can be fooled, at least temporarily, into thinking that LLMs understand things and are reliable partners in life.

[–] TachyonTele@lemm.ee 1 points 1 week ago

I agree with you there.

[–] thirdBreakfast@lemmy.world 3 points 1 week ago

> afterallwhynot.jpg
