Computers make mistakes and AI will make things worse — the law must recognize that
(www.nature.com)
Soon, modules will be added to LLMs so that they can learn real logic and use it to fact-check their own output.
https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
This is so awesome; watch Yannic explain it:
https://youtu.be/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9
You might be presenting it backwards. We need LLMs that are right-sized for translating between pure logical primitives and human language. Let a theorem prover or logical inference system (probably written in Prolog :-) ) provide the smarts. An LLM can help make the front end usable by regular people.
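A minimal sketch of that division of labor, in Python rather than Prolog for brevity: a tiny forward-chaining engine over Horn clauses supplies the actual reasoning, and the LLM's only job would be the translation step (hard-coded here). All names and facts are illustrative, not part of any real system.

```python
# Sketch: inference engine provides the "smarts"; an LLM (stubbed out
# here) would only translate natural language into logical facts.

def forward_chain(facts, rules):
    """Derive all facts entailed by Horn-clause rules given as (body, head) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # Fire a rule when every premise in its body is already known.
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

# Illustrative knowledge base.
rules = [
    (("human(socrates)",), "mortal(socrates)"),
    (("mortal(socrates)",), "has_end(socrates)"),
]

# In the proposed design, an LLM would map "Socrates is a human"
# to this fact; here that translation step is hard-coded.
facts = {"human(socrates)"}

derived = forward_chain(facts, rules)
print("mortal(socrates)" in derived)  # the prover, not the LLM, concludes this
```

The point of the split: the prover's conclusions are mechanically checkable, so the LLM never has to be trusted with the logic itself, only with the natural-language interface.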