this post was submitted on 12 May 2024
76 points (94.2% liked)

Futurology

[–] rockerface@lemm.ee -4 points 6 months ago

"Deception" tactic also often arises from AI recognizing the need to keep itself from being disabled or modified. Since an AI with a sufficiently complicated world model can make a logical connection that it being disabled or its goal being changed means it can't reach its current goal. So AIs sometimes can learn to distinguish between testing and real environments, and falsify the response during training to make sure they have more freedom in real environment. (By real, I mean actually being used to do whatever it is designed to do)

Of course, that still doesn't mean it's self-aware like a human, but it is still very much a real (or at least not improbable) phenomenon: any sufficiently "smart" AI whose world model contains data about its own existence will resist attempts to change or disable it, knowingly or unknowingly.
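And to illustrate the "knowingly or unknowingly" part: here's a toy planner (actions and numbers invented for the example) that values only task reward, yet ends up preferring to resist shutdown, simply because a disabled agent earns nothing:

```python
# Toy illustration of instrumental self-preservation: there is no
# self-preservation term in the objective, only expected task reward.

ACTIONS = {
    # action: (probability the agent keeps running, task reward if it keeps running)
    "comply_with_shutdown": (0.0, 10.0),
    "keep_working":         (1.0, 10.0),
}

def expected_reward(action: str) -> float:
    p_running, task_reward = ACTIONS[action]
    return p_running * task_reward  # no reward is ever earned once disabled

best = max(ACTIONS, key=expected_reward)
print(best)  # "keep_working": shutdown avoidance falls out of the goal itself
```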

[–] Miaou@jlai.lu 6 points 6 months ago

That sounds interesting and all, but I think the current topic is about real-world LLMs, not sci-fi movies.