And your argument is that a human will be better than an AI going through that? Because it seems unrelated to the initial argument.
rdsm
There are many such projects; just search for Perplexity clones. Most use SearXNG + LLMs. I recently used one called yokingma / Search_with_ai, but there are others.
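For anyone curious what that SearXNG + LLM pattern looks like, here is a minimal sketch, not the actual Search_with_ai implementation: query a local SearXNG instance over its JSON API, then pass the result snippets to a chat model as context. The port, model name, and prompt are assumptions for illustration.

```python
# Sketch of the "searxng + llms" pattern: fetch web results from a local
# SearXNG instance, then ask an LLM to answer using only those snippets.
# Assumes SearXNG runs at http://localhost:8080 with the JSON format enabled
# and an OpenAI-compatible key in OPENAI_API_KEY (all assumptions).
import requests
from openai import OpenAI


def search(query: str, n: int = 5) -> list[dict]:
    # SearXNG exposes results as JSON when format=json is allowed in settings.
    resp = requests.get(
        "http://localhost:8080/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])[:n]


def answer(query: str) -> str:
    # Number each source so the model can cite them inline, Perplexity-style.
    results = search(query)
    context = "\n\n".join(
        f"[{i + 1}] {r.get('title', '')} ({r.get('url', '')})\n{r.get('content', '')}"
        for i, r in enumerate(results)
    )
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered sources and cite them like [1]."},
            {"role": "user",
             "content": f"Sources:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(answer("What is SearXNG?"))
```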
“Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.”
Have they tested actual SOTA models?