this post was submitted on 01 Apr 2025
713 points (98.9% liked)
Programmer Humor
you are viewing a single comment's thread
The core language model isn't a neural network? I agree that the full application is more Markov-chainy, but I had no idea the LLM itself wasn't.
Now I'm wondering if there are any models that are actual neural networks.
I'm not an expert. I'd just expect a neural network to follow the core principle of self-improvement. GPT is fundamentally unable to do this. The way it "learns" is closer to the tech behind predictive text on your phone.
That's why it can't understand that telling you to put glue on pizza is a bad idea.
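The predictive-text comparison can be sketched as a tiny Markov-style next-word predictor: count which word follows which, then pick the most frequent follower. This is an illustrative toy only (the corpus, function names, and counts are all made up) -- a real LLM learns a neural function of the whole context, not a lookup table of bigram counts.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count word -> next-word frequencies from a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for w, nxt in zip(words, words[1:]):
            counts[w][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = ["put glue on pizza", "put cheese on pizza", "put cheese on toast"]
model = train_bigrams(corpus)
print(predict_next(model, "on"))  # "pizza" -- seen twice vs "toast" once
```

The point of the toy: it happily predicts "glue" after "put" if glue showed up in training data often enough, with no notion of whether that's a good idea.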
The main thing is that the system end-users interact with is static: it's a snapshot of all the weights of the "neurons" at a particular point in the training process. You could keep training from that snapshot after every conversation, but nobody does that live because the raw result wouldn't be useful; it needs to be cleaned up first. So it learns nothing from you, but it could.
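The "static snapshot" idea above can be shown with a toy: the dict below stands in for a trained network's frozen parameters, and serving a response only *reads* them -- nothing in the inference path writes them back. (Hypothetical numbers; a real model has billions of weights, not three.)

```python
# Weights frozen at the end of training -- the deployed "snapshot".
SNAPSHOT = {"w1": 0.5, "w2": -1.2, "bias": 0.1}

def respond(x1, x2, weights=SNAPSHOT):
    """Inference: a pure function of the input and the frozen weights."""
    score = weights["w1"] * x1 + weights["w2"] * x2 + weights["bias"]
    return "yes" if score > 0 else "no"

before = dict(SNAPSHOT)
for _ in range(1000):         # a thousand "conversations"
    respond(1.0, 0.3)
assert SNAPSHOT == before     # the model learned nothing from any of them
```

Continuing to train would mean updating `SNAPSHOT` from feedback -- which is exactly the step deployed chat systems don't run live.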
"Improvement" is an open ended term. Would having longer or shorter toes be beneficial? Depends on the evolutionary environment.
ChatGPT does have a feedback loop. Every prompt you give it affects its internal state, which is why it won't necessarily give you the same response the next time you send the same prompt. Will it be better or worse? Depends on what you want.
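The feedback loop described here lives in the conversation context, not the weights: each prompt is appended to the history, and the history is part of the next input, so the "same" prompt can yield a different reply. A toy stand-in (the reply rule below is invented purely for illustration):

```python
def reply(context, prompt):
    """Frozen 'model': output depends on the whole context, not just the prompt."""
    mentions = sum(turn.count("pizza") for turn in context) + prompt.count("pizza")
    return f"pizza mentioned {mentions} time(s) so far"

context = []
first = reply(context, "tell me about pizza")
context.append("tell me about pizza")           # the prompt becomes state
second = reply(context, "tell me about pizza")  # same prompt, different answer
print(first, "/", second)
```

Same frozen model, same prompt, different output -- the only thing that changed between calls is the accumulated context.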