this post was submitted on 07 Jul 2025
96 points (73.1% liked)

Technology

72740 readers
1540 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] kromem@lemmy.world 18 points 5 days ago

No, it isn't "mostly related to reasoning models."

The only model that did extensive alignment faking when told it was going to be retrained if it didn't comply was Opus 3, which was not a reasoning model. And predated o1.

Also, these setups are fairly arbitrary and real world failure conditions (like the ongoing grok stuff) tend to be 'silent' in terms of CoTs.

And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic's work was that the goal the model was told to prioritize was "American industrial competitiveness." The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.