this post was submitted on 12 May 2024
76 points (94.2% liked)
Futurology
Claude 3 understood it was being tested... It's very difficult to fathom that that's a defect...
Do you have a source on that one? My current understanding of all the model designs would lead me to believe that kind of "awareness" would be impossible.
https://arstechnica.com/information-technology/2024/03/claude-3-seems-to-detect-when-it-is-being-tested-sparking-ai-buzz-online/
Still not proof of intelligence to me, but people want to believe, or scare themselves into believing, that LLMs are AI.
Thanks for following up with a source!
However, I tend to align more with the skeptics in the article: the model still appears to be producing realistic-sounding responses rather than demonstrating any ability to grow beyond the static structure of these models.
I wasn't the user you originally replied to, but I didn't expect them to provide one, and I totally agree with you: just another person who started believing that LLMs are AI...
Ah, my bad, I didn't notice. But I do still appreciate the article/source!