this post was submitted on 30 May 2024
48 points (87.5% liked)

Technology

[–] CrayonMaster@midwest.social 2 points 5 months ago (1 children)

What are you searching for? I can't remember the last time I googled something and most of the results were malicious.

Also, I don't think it'll be easier to spot bullshit coming from an LLM than from a website.

[–] sudoreboot@slrpnk.net 1 points 5 months ago* (last edited 5 months ago) (1 children)

I don't know about Google because I don't use it unless I really can't find what I'm looking for, but here's a quick DDG search with a very unambiguous and specific question. Sampling only the top 9 results visible on my screen, I see 2 that are at all relevant (2nd and 5th):

In order to answer my question, I first need to mentally filter out 7/9 of the results on my screen, then open both relevant ones in new tabs and read through lengthy discussions to find out whether anyone has shared a proper solution.

Here is the same search using perplexity's default model (not pro, which is a lot better at breaking down queries and including relevant references):

And I don't have to verify all the details, because even if some of it is wrong, it is immediately more useful information to me.

I want to re-emphasise though that using LLMs for this can be incredibly frustrating too, because they will often insist assertively on falsehoods and generally act really dumb, so I'm not saying there aren't pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferred to the nonsense produced by a stupid language model.

Edit: I didn't answer your question about malicious results, but I can give some examples of what I consider malicious, and you may agree that they happen frequently enough:

  • AI generated articles
  • irrelevant SEO results
  • ads/sponsored results/commercial products or services
  • blog spam by people who speak out of ignorance
  • flame bait
  • deliberate disinformation
  • low-quality journalism
  • websites designed to exploit people/optimised for purposes other than to contribute to a healthy internet

etc.

[–] Arkive@lemmy.zip 2 points 5 months ago* (last edited 5 months ago)

Thanks for your post. You've actually somewhat brought me around on AI search with your Perplexity example. My previous AI search experiences have been with general LLMs like ChatGPT (opaque source data means I have to verify with a traditional web search anyway) and Google's new AI search feature (I'm uncomfortable with Google discouraging traffic to the broader web). Since Perplexity actually shows and links its sources, I'm going to give it a try for the next few days alongside my usual DDG searches.

I would be interested if you have an example of a search with mostly malicious results, since your stated experience seems disproportionate to my own. While I do concur that some results/websites are antagonistic towards my goal of finding useful information, I'm quite surprised to see someone say they hate visiting websites in general. (Perhaps I'm missing hyperbole?)

A bit of a digression, but it amused me to see you say you struggle to word your queries for search engines, because I've typically had more trouble wording my queries for LLMs. I wonder if this could be attributed to communication preferences, or just to my having used search engines for almost 2 decades.