this post was submitted on 03 Jun 2024
1292 points (96.4% liked)
Let's agree to disagree then. An LLM has no notion of semantics; it's just outputting the most likely next word given what it has already written and the user's input.
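To make that concrete, here's a toy sketch of the next-word loop I mean. The probability table is completely made up; a real model computes those probabilities from billions of learned weights, not a lookup table.

```python
# Toy greedy next-word selection: given the text so far, pick whichever
# continuation the (made-up) table says is most likely. No meaning involved.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def next_word(context):
    candidates = next_word_probs.get(tuple(context[-2:]), {"<eos>": 1.0})
    return max(candidates, key=candidates.get)  # most likely continuation

words = ["the", "cat"]
for _ in range(2):
    words.append(next_word(words))
print(" ".join(words))  # the cat sat on
```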
On the contrary, expert systems from back in the 90s for, say, predicting the atomic structure of an element, work like a human brain on steroids. They feature an arbitrarily large search tree that the software knows how to iteratively prune according to a well-known set of chemical rules. We do the same when analyzing a set of options.
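Here's a rough sketch of what I mean by rule-based pruning. The "shells" and the rule are invented placeholders, not actual chemistry; the point is that every cut branch can be traced back to an explicit, human-written rule.

```python
# Expert-system-style search: enumerate candidate configurations and prune
# any branch that violates a hand-written rule, instead of learning from data.
RULES = [
    lambda cfg: all(shell <= 2 for shell in cfg),  # toy "max 2 electrons per shell"
]

def expand(cfg):
    # child configurations: add one electron to each shell in turn
    return [cfg[:i] + (cfg[i] + 1,) + cfg[i + 1:] for i in range(len(cfg))]

def search(cfg, target_total):
    if any(not rule(cfg) for rule in RULES):
        return set()                      # prune: branch breaks a known rule
    if sum(cfg) == target_total:
        return {cfg}                      # valid complete configuration
    if sum(cfg) > target_total:
        return set()                      # overshot, nothing below here can work
    found = set()
    for child in expand(cfg):
        found |= search(child, target_total)
    return found

print(search((0, 0), 3))  # every rule-respecting way to place 3 electrons in 2 shells
```

Every pruning decision there is explainable and debuggable, which is exactly what you give up with a learned model.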
Debugging "current" AI models, on the other hand, is impossible, because all we're doing is prescribing a composition of functions and forcing it to minimize a loss function. That's it. How can you tell ahead of time that a certain model is going to work? Unless the mathematical theory ever catches up with the technology, we'll never know until we execute the code.
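Stripped down, the whole recipe is something like this toy loop (one made-up parameter, a handful of made-up points, an arbitrary learning rate):

```python
# "Compose functions, minimize a loss": fit y = w * x to some points by
# gradient descent. We only ever push the loss down and inspect numbers
# afterwards; nothing in the loop reasons about *why* the result works.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # (x, y) pairs, roughly y = 2x
w = 0.0                                        # the single parameter we tune

for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)  # d(loss)/dw
    w -= 0.05 * grad                           # nudge w to reduce the squared error

loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(f"w = {w:.2f}, loss = {loss:.4f}")       # we can see *that* it fits, not *why*
```

Scale that up to billions of parameters and the only "debugging" left is running it and looking at the loss.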