This post was submitted on 05 May 2025
433 points (95.6% liked)

[–] just_another_person@lemmy.world 27 points 3 days ago (3 children)

Not trying to sound like a prepper or anything, but this is real.

One of my neighbor's children just committed suicide because their chatbot boyfriend said something negative. Another kid in my community did something similar a few years ago.

Something needs to be done.

[–] toastmeister@lemmy.ca 36 points 3 days ago

Like what, some kind of parenting?

[–] besselj@lemmy.ca 15 points 3 days ago (1 children)
[–] FaceDeer@fedia.io 15 points 3 days ago (3 children)

This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so it ends up sounding like an "ooh, scary AI!" story (baits clicks better) rather than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).

[–] Krimika@lemmy.world 2 points 2 days ago

Sounds like Mrs. Davis.

[–] hendrik@palaver.p3x.de 12 points 3 days ago (1 children)

Oh wow. In the old days, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden inside a big tech company's service?

[–] iAvicenna@lemmy.world 11 points 3 days ago (2 children)

Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems love this kind of interaction far more than real people confronting their ideas critically.

[–] Zozano@aussie.zone 15 points 3 days ago* (last edited 3 days ago) (33 children)

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.

  • Tell it straight—no sugar-coating.

  • Stay skeptical and question things.

  • Keep a forward-thinking mindset.

  • User values deep, rational argumentation.

  • Ensure reasoning is solid and well-supported.

  • User expects brutal honesty.

  • Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness.

  • Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged.

  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
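
If you use the API rather than the ChatGPT app, the same idea works as a system prompt. A minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the environment; the model name and the condensed wording of the instructions below are placeholders, not anything from the settings UI:

```python
# Minimal sketch: baking the "brutal honesty" instructions into a system
# prompt via the OpenAI Python SDK (openai >= 1.0). Assumes OPENAI_API_KEY
# is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Correct the user when words or phrases are used incorrectly. "
    "Tell it straight, no sugar-coating. Stay skeptical and question things. "
    "Ensure reasoning is solid and well-supported. Point out flaws and "
    "errors immediately. If a claim lacks support, dig deeper and challenge it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever chat model you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is this argument actually sound?"},
    ],
)
print(response.choices[0].message.content)
```

The custom-instructions field in the ChatGPT app does the same thing without any code; the API route just makes the instructions explicit and portable.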

[–] wwb4itcgas@lemm.ee 14 points 3 days ago (25 children)

Our species really isn't smart enough to live, is it?
