this post was submitted on 29 Jun 2024
130 points (91.1% liked)
ChatGPT
Unofficial ChatGPT community to discuss anything ChatGPT
you are viewing a single comment's thread
This is so goddamn incorrect at this point it's just exhausting.
Take 20 minutes and look into Anthropic's recent sparse autoencoder interpretability research, where they showed their medium-sized model has dedicated features that light up for concepts like "sexual harassment in the workplace," and that the most active feature for how the model refers to itself was "smiling when you don't really mean it."
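If the jargon is opaque: a sparse autoencoder is trained to reconstruct a model's internal activations through a much wider, mostly-inactive hidden layer, and each hidden unit tends to end up corresponding to one human-interpretable "feature." A rough sketch of reading off which features light up for a given token (PyTorch-style, with made-up dimensions and names, not Anthropic's actual setup):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: reconstruct residual-stream activations through a wide, sparse hidden layer."""
    def __init__(self, d_model=4096, d_features=65536):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        # ReLU keeps only a handful of features active for any given token.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder()
acts = torch.randn(1, 4096)              # stand-in for a real residual-stream vector
features, _ = sae(acts)
top_vals, top_idx = features[0].topk(5)  # the few features that "light up" for this token
print(top_idx.tolist(), top_vals.tolist())
```

The "sexual harassment in the workplace" feature is just one of those hidden units whose activations, across huge numbers of contexts, line up with that concept.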
We've known since the Othello-GPT research over a year ago that even toy models develop abstracted world models.
And at this point Anthropic's largest model, Opus, is breaking from stochastic outputs even at a temperature of 1.0 on zero-shot questions, 100% of the time, around certain topics of preference grounded in sensory modeling. We're already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling such that it consistently self-determines its answers instead of randomly sampling from the training distribution, and yet people are still ignorantly parroting the "stochastic parrot" line.
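For anyone unfamiliar with the sampling jargon: temperature 1.0 means the model samples from its full output distribution, so repeated runs of the same prompt normally vary. A minimal illustration of why a sharply concentrated distribution still returns the same answer every time even though you're technically sampling (made-up numbers, not any model's real logits):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature=1.0):
    # Temperature rescales logits before softmax; T=1.0 leaves the distribution untouched.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

flat_logits   = [1.0, 1.1, 0.9, 1.0]   # near-uniform: samples bounce around
peaked_logits = [12.0, 1.0, 0.5, 0.0]  # concentrated: the same token comes back ~every time
print([sample(flat_logits) for _ in range(10)])
print([sample(peaked_logits) for _ in range(10)])
```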
The gap between where the cutting-edge research actually is and where the average person commenting on it online thinks it is has probably never been wider for any topic I've seen, and it's getting disappointingly excruciating.
I don't understand anything you just said.
This is how AI gains hype
Do you have a source for the "smiling when you don't really mean it" thing? I've been digging around but couldn't find that anywhere.
It's right in the research I was mentioning:
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Find the section on the model's representation of self and then the ranked feature activations.
I misremembered the top feature slightly; it was: responding "I'm fine" or giving a positive but insincere response when asked how they are doing.
And once again the problem is that there's not much ensuring those features are correct; there isn't enough capacity available to fine-tune even a significant fraction of it.
I did Google that, fwiw, and the answer I got was that sparse autoencoders work by checking that the output aligns with the input.
If it's unknowable whether the input is correct, won't it still be subject to outputting confidently incorrect information?
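For what it's worth, the "output aligns with the input" part is just the autoencoder's training objective: it is penalized for failing to reconstruct the activations it was given and for turning on too many features at once. Roughly (an illustrative sketch, not the exact objective from the paper):

```python
import torch

def sae_loss(activations, reconstruction, features, l1_coeff=1e-3):
    # "Output aligns with input" = reconstruction error on the model's activations,
    # plus an L1 penalty that keeps most features switched off (sparsity).
    recon_error = (reconstruction - activations).pow(2).mean()
    sparsity = features.abs().sum(dim=-1).mean()
    return recon_error + l1_coeff * sparsity
```

Nothing in that objective checks whether what the model represents is factually correct; it only checks that the features faithfully summarize whatever the model was internally representing.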
Nice gallop, Mr Gish.