this post was submitted on 28 Jun 2025
90 points (100.0% liked)
TechTakes
Personally, I'd prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear and to get users to form friendships with them; the mental health crises we are seeing are entirely by design.
I think most cons, scams, and cults are capable of damaging vulnerable people's mental health beyond even their most obvious harms. The same is probably happening here; the only difference is that this con can auto-generate its own propaganda/PR.
I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have turned their focus to the high cost of running them.
This somewhat reminds me of how cryptobros used to claim they were fighting the "legacy financial system" while building a worse version (almost a parody) of it. That outcome is probably inevitable when you run an unregulated financial system and try to extract as much money from it as possible.
Likewise, if you have a tool capable of messing with people's minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an MLM, or a similarly toxic group.