this post was submitted on 09 Apr 2024

A prototype is available, though it's Chrome-only and English-only at the moment. It works like this: you select some text, then click the extension, which will try to "return the relevant quote and inference for the user, along with links to article and quality signals".

Under the hood, it uses ChatGPT to generate a search query, calls Wikipedia's search API to find relevant article text, and then uses ChatGPT to extract the relevant passage.
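A minimal sketch of that three-step pipeline, assuming the public MediaWiki search API; the function names (`generate_query`, `extract_quote`, `fact_check`) and the `llm` callable are hypothetical stand-ins, not the extension's actual code:

```python
import json
import urllib.parse
import urllib.request

WP_API = "https://en.wikipedia.org/w/api.php"

def build_search_url(query, limit=3):
    """Build a MediaWiki full-text search URL for the generated query."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,
        "format": "json",
    }
    return WP_API + "?" + urllib.parse.urlencode(params)

def generate_query(selected_text, llm):
    # Step 1: ask the LLM to turn the highlighted text into a search query.
    return llm(f"Write a short Wikipedia search query for: {selected_text}")

def extract_quote(article_snippet, selected_text, llm):
    # Step 3: ask the LLM to quote the passage that bears on the selection,
    # rather than to answer from its own memory.
    return llm(
        f"Quote the sentence most relevant to {selected_text!r} "
        f"from this text:\n{article_snippet}"
    )

def fact_check(selected_text, llm):
    # Step 2: run the generated query against Wikipedia's search API.
    query = generate_query(selected_text, llm)
    with urllib.request.urlopen(build_search_url(query)) as resp:
        hits = json.load(resp)["query"]["search"]
    # Return the extracted quote plus a direct link so the user can verify.
    return [
        (extract_quote(h["snippet"], selected_text, llm),
         "https://en.wikipedia.org/wiki/" + h["title"].replace(" ", "_"))
        for h in hits
    ]
```

The point of the structure is that the LLM never asserts facts on its own: it only rewrites the selection into a query and pulls quotes out of retrieved text, and every result carries a link back to the source article.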

[–] BlueBockser@programming.dev 11 points 5 months ago

I'm skeptical given how confidently many recent AI models make wrong claims. Fact checking seems like a rather poor use case for current AI models, IMO.

This looks less like the LLM making a claim itself and more like using an LLM to generate a search query and then read through the results for anything that relates to the selected text.

It leans into the things LLMs are pretty good at (summarizing natural language; constructing queries according to a given pattern; checking through text for content that matches semantically instead of literally) and links directly to a source instead of leaning on the thing that LLMs only pretend to be good at (synthesizing answers).