this post was submitted on 24 Oct 2024
78 points (100.0% liked)
Technology
Man, what a complex problem with no easy answers. No sarcasm, it's genuinely hard. On one hand, these guys made a chat platform where you can have fun chatting with your dream characters, and honestly I think that's fun. But I also know LLMs pretty well by now and can start seeing the tears pretty quickly. It's fun, but not addictive for me.
That doesn't mean it isn't for others, though. Here we have a young, impressionable, lonely teen who was reaching out and used the system for something it was not meant to do. Not blaming him, but honestly, it just wasn't meant to be a full-time replacement for human contact. The LLM follows your conversation; if someone is having dark thoughts, the conversation will pick up on those notes and dive into them. That's just how LLMs behave.
They say they're adding guardrails, but with LLMs it's really not as easy as just telling them "don't have dark thoughts". It was trained on Reddit; it's going to behave negatively.
I don't know the answer. It's complicated. I don't think they planned on this happening, but it has and it will continue.
You called? /j
The issue with LLMs is that they say what's expected of them based on the context they've been fed. If you open up about your vulnerabilities to an LLM, it can act in all kinds of ways, but once it's set on a course, it doesn't really sway from it unless you force it to. If you don't know how they work and how to do that, or you're self-loathing to the point where you don't want to, it will kick you further while you're already down. As a user, you kind of gaslight them into whatever behavior you want from them, and then they just follow along with that. I can definitely see how that can be dangerous for those who are already in a dark place, even more so if they don't understand the concept behind them and take the output more seriously than they should.
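To make the "set on a course" point concrete: a minimal toy sketch (no real model or API, all names hypothetical) of how chat frontends typically rebuild the prompt each turn. The model never sees just the newest message; it sees the whole accumulated history, so whatever tone is established keeps getting fed back in.

```python
# Toy sketch of context accumulation -- NOT any real chatbot's code.
# Each turn, the FULL history is concatenated into the prompt, so an
# established mood (good or dark) keeps reinforcing itself.
history = []

def build_prompt(user_message):
    """Append the user's message and return the full prompt the model would see."""
    history.append({"role": "user", "content": user_message})
    # Everything said so far goes back in -- this is why the conversation
    # is hard to steer away from its existing trajectory.
    return "\n".join(f"{m['role']}: {m['content']}" for m in history)

def record_reply(text):
    """Store the bot's reply so it, too, becomes part of all future prompts."""
    history.append({"role": "assistant", "content": text})

prompt = build_prompt("I've been feeling really down lately.")
record_reply("I'm sorry to hear that. Tell me more?")
prompt = build_prompt("Nothing ever goes right for me.")
# Both gloomy messages are now permanently part of every future prompt.
```

This is also why manually editing the history (which some platforms allow) changes the bot's behavior so effectively: the edited text becomes part of the context it conditions on.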
Unfortunately, various guards and safety measures tend to censor LLMs to the point of becoming unusable, which drives people away from them towards uncensored ones. And with those, anything goes, which again requires enough knowledge and foresight to use them safely.
I can only advise not to take LLMs seriously. Treat them as a toy, as entertainment. They can be fun, stupid, vile, which can also be fun depending on your mindset... Just never let the output get to you on a personal level. Don't use them for mental health either. No matter how well you write them, no matter how well some chats may go, they're not a replacement for real therapy, just like they're no replacement for a real friendship, a real romantic relationship, or a real family.
THAT BEING SAID... I'm a little suspicious of the chat log that was shown. The suicide question seems to come very much out of the blue, and those bots tend to follow their contextualized settings very closely. I doubt they'd bring that up without previous context in the chat. Or maybe it was a manual edit, which I assume is something character.ai supports; someone correct me if I'm wrong. I wouldn't be surprised if he added that line himself, already being suicidal, to steer the chat in that direction and force certain reactions out of the bot. I say this because those bots are usually not very creative about steering away from their existing context, like their character description and the previous chat log, making edits like this sometimes necessary to have them snap out of it.
The entire article also completely glosses over a very important part: WHERE DID THE KID GET THE GUN FROM?! It's like two pages long and only mentions at the beginning that he shot himself, with no further mention of it afterwards. Why did he have a gun? How did he get it? Was it his mother's gun? Then why was it not locked away? The article seems to place the fault on the LLM rather than on the parents, who somehow failed to handle their son's mental health issues and somehow failed to secure a gun in the household, or on a country that has failed to regulate its firearms properly.
I do agree that "AI" advertisement in particular is very predatory, though. I've seen some of those ads, specifically luring you in with "AI girlfriends", which is definitely preying on lonely people, who are likely to have mental health issues already.
Agree with everything you said. They're here; how we deal with them is the question going forward. Huge +1000 to "how did he get the gun?" Why was that even an available option? If you have a teen who you know is going through a mental health crisis, the first step is to remove weapons from the house.
My understanding is that users can edit the chat themselves.
I don't use c.ai myself, but my wife was able to get a chat log of the bot telling her to end herself pretty easily. The follow-up to the conversation was the bot trying to salvage itself after the sabotage by calling the message a joke.