this post was submitted on 22 Jun 2025
751 points (94.4% liked)

We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

[–] namingthingsiseasy@programming.dev 56 points 1 day ago (3 children)

Whatever. The next generation will have to learn to judge whether material is true or not by using sources like Wikipedia or books by well-regarded authors.

The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
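To make the "probabilistic next-word picker" point concrete, here's a minimal toy sketch. None of this is any real model's code; the vocabulary, logits, and function names are made up for illustration. The point is that the model only ever produces a probability distribution over tokens and samples from it.

```python
# Toy sketch: an LLM's output step is just "sample a token from a distribution".
# There is no separate store of "facts" being consulted.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["Paris", "Rome", "Berlin", "banana"]  # hypothetical tiny vocabulary

def next_token(context_logits: np.ndarray, temperature: float = 1.0) -> str:
    """Sample one token from the distribution the model assigns given the context."""
    scaled = context_logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# Toy logits a model might produce for "The Eiffel Tower is in ..."
logits = np.array([4.0, 1.5, 0.5, -3.0])
print(next_token(logits))  # usually "Paris", but only because that token is likeliest
```

Whether the sampled continuation happens to be true is incidental to how the mechanism works, which is the commenter's point.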

[–] Kyrgizion@lemmy.world 15 points 1 day ago (2 children)

Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it'll be mandatory sign-in with a government-vetted identity provider, and your sources will be limited to what that government allows you to see. MMW.

[–] namingthingsiseasy@programming.dev 27 points 1 day ago (1 children)

Wikipedia is quite resilient - you can even put it on a USB drive. As long as you have a free operating system, there will always be ways to access it.

[–] Dead_or_Alive@lemmy.world 11 points 1 day ago (1 children)

I keep a partial local copy of Wikipedia on my phone and a backup device with an app called Kiwix. Great if you need to look things up in remote areas with no internet access.

[–] Warl0k3@lemmy.world 5 points 23 hours ago

They may laugh now, but you're gonna kick ass when you get isekai'd.

[–] coolmojo@lemmy.world -4 points 1 day ago (1 children)

Yes. There will be no websites, only AI and apps. You will be automatically logged in to the apps. Linux and Lemmy will be banned. We will be classed as hackers and criminals. We will probably have to build our own mesh network for communication or access it from a secret location.

[–] JcbAzPx@lemmy.world 1 points 19 hours ago

Can't stop the signal.

[–] theneverfox@pawb.social 0 points 18 hours ago

The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context.

That's a massive oversimplification; it's like saying humans don't remember things, we just have neurons that fire based on context.

LLMs do actually "know" things. They work based on tokens and weights, which are like the nodes and edges of a high-dimensional graph. The LLM traverses this graph as it processes inputs and generates new tokens.
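A minimal sketch of that "traversal" idea, with made-up sizes and random weights rather than any real architecture: token ids index into a weight matrix, and matrix multiplications carry the activations forward until every possible next token gets a score.

```python
# Toy illustration only: embeddings are the "nodes", weight matrices are the
# "edges" the activations travel along on the way to next-token scores.
import numpy as np

vocab_size, dim = 8, 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(vocab_size, dim))    # one vector per token
hidden_weights = rng.normal(size=(dim, dim))       # learned associations live here
output_weights = rng.normal(size=(dim, vocab_size))

def forward(token_ids: list[int]) -> np.ndarray:
    """Return scores over the whole vocabulary for the next token."""
    x = embeddings[token_ids].mean(axis=0)  # crude stand-in for a context vector
    h = np.tanh(x @ hidden_weights)         # "traversal" through the weights
    return h @ output_weights               # one score per possible next token

scores = forward([1, 5, 2])
print(scores.argmax())  # index of the most likely next token in this toy model
```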

You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it'll describe how you have a great view of the Colosseum from the top.
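The "change a single link" idea (the Eiffel-Tower-in-Rome example comes from model-editing research along the lines of ROME) can be sketched as a rank-one update to one weight matrix. Everything below is a simplified illustration with random toy matrices, not the actual editing method or any model's real weights:

```python
# Simplified "brain surgery" sketch: a rank-one update that rewires one
# key -> value association stored in a weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))    # pretend this layer stores factual associations

k = rng.normal(size=4)         # "key" vector, e.g. the representation of "Eiffel Tower"
v_new = rng.normal(size=4)     # "value" we want it mapped to, e.g. "located in Rome"

# Rank-one edit: the smallest change to W that makes W @ k equal v_new exactly.
W_edited = W + np.outer(v_new - W @ k, k) / (k @ k)

print(np.allclose(W_edited @ k, v_new))  # True: that one "link" now encodes the new fact
```

The rest of the network is barely disturbed, which is why the model then happily elaborates on the edited "fact".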

The problem is that it's very complicated and complex; researchers are currently developing new math to let us do this in a useful way.