this post was submitted on 25 Dec 2023
126 points (95.0% liked)
Hacker News
[Double reply to avoid editing my earlier comment]
From the HN thread:
I think the first sentence is accurate, but I disagree with the second.
Probabilistic likelihood is not enough to create a good illusion of understanding or intelligence. Relying on it alone produces situations like the one in the OP, where the bot outputs nonsense because of an unexpected prompt.
To avoid that, the model would need some symbolic (or semantic, or conceptual) layer[s] and handle the concepts conveyed by the tokens, not just the tokens themselves. But that is already closer to intelligence than to probabilistic likelihood.
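The failure mode described above can be sketched with a deliberately tiny toy: a bigram model that picks the next token purely by observed likelihood. The corpus, vocabulary, and fallback behaviour here are all invented for illustration; real LLMs use learned distributions over vastly larger contexts, but the point stands that a pure likelihood lookup has nothing sensible to say about a prompt it never saw.

```python
import random

# Toy bigram "language model": the next token is chosen purely from
# co-occurrence counts, with no conceptual layer behind the tokens.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count bigram transitions: token -> {next_token: count}
transitions: dict[str, dict[str, int]] = {}
for cur, nxt in zip(corpus, corpus[1:]):
    bucket = transitions.setdefault(cur, {})
    bucket[nxt] = bucket.get(nxt, 0) + 1

def next_token(token: str, rng: random.Random) -> str:
    """Sample the next token by likelihood; if the prompt token was never
    seen, fall back to a uniformly random vocabulary token (nonsense)."""
    if token not in transitions:
        return rng.choice(sorted(set(corpus)))
    tokens, weights = zip(*transitions[token].items())
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random(0)
print(next_token("the", rng))     # in-distribution: a plausible continuation
print(next_token("teapot", rng))  # unexpected prompt: arbitrary, ungrounded token
```

An unexpected prompt ("teapot") does not produce an error or an "I don't know"; the model just emits *something* with no relation to what the prompt means, which is exactly the behaviour the comment is pointing at.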