Google co-founder Sergey Brin suggests threatening AI [with physical violence] for better results
(www.theregister.com)
We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!
Posts must be:
Please also avoid duplicates.
Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.
And that’s basically it!
It's not that they "do better". As the article says, these AIs are parrots that combine information in different ways, and using threatening language in the prompt leads them to combine it differently than a neutral prompt would. Just because you get a different response doesn't make it better. If 10 people were asked to retrieve information from an AI by coming up with a prompt, and 9 of them got basically the same information because they used neutral prompts while 1 person threatened the AI and got something different, that doesn't necessarily make that one person's info better. By Sergey's definition, "better" just means getting the unique response, but if it's inaccurate or incorrect, is it really better?