this post was submitted on 08 Sep 2024
66 points (100.0% liked)

Technology

tacking on a bunch of LLMs sure is a way to "make the web more human".

top 19 comments
[–] furzegulo@lemmy.dbzer0.com 58 points 2 months ago (1 children)

please stop. just fucking stop shoving this shit into everything.

[–] melroy@kbin.melroy.org 15 points 2 months ago

Too late.. everybody is doing this shizzle. I can't take it anymore.

[–] halm@leminal.space 42 points 2 months ago (1 children)

I didn't want to pay for their search engine before, and this garbage sure as hell isn't going to change my mind.

[–] floofloof@lemmy.ca 17 points 2 months ago* (last edited 2 months ago)
[–] FlashMobOfOne@beehaw.org 22 points 2 months ago
[–] e8d79@discuss.tchncs.de 16 points 2 months ago (1 children)

Kagi was founded as an AI company so this is not surprising. I unsubscribed from them after learning that. Also, their CEO is a weirdo who harasses people critical of their product and he thinks the GDPR is optional.

[–] averyminya@beehaw.org 5 points 2 months ago

It's funny, I've been thinking a lot about how people acknowledge faults or shortcomings and choose to ignore them, whether it's because they agree, don't care, or think it doesn't matter. Or they don't agree and there's no better alternative, or it's the least bad alternative. I dunno.

In the public internet spaces like Facebook, Discord, and the others, I've been seeing a lot of this happening recently with Linkin Park's new singer. Some are happy and ignorant, some know and don't care, some know and are saddened. There is a lot of vitriol between the people who know and are saddened and the people who don't know or don't care. This is just one example from this week, but it happens every week with every story. It can probably be applied to literally anything. People's level of information interacts heavily with their predisposed beliefs (as in, if they already have an opinion, chances are that opinion will not change when presented with new information).

In our spaces I see it with Brave. I see it with Kagi. We all saw it with Unity en masse, and something actually happened about that, but even so people are still using Unity today, albeit I would guess out of necessity, or now out of ignorance since time has passed (not saying ignorance here is a fault). Before that we saw it with Audacity. Can't forget Reddit, a significant chunk of whose users are now participating here instead. And... yet... Reddit still exists, nearly in full.

It's such a crazy phenomenon, how opinions are formed from emotional judgements based on the level of information people have, and due to our current state of information sharing there are microcosms of willful ignorance. And some aren't ignorant; it just doesn't matter to them.

[–] hersh 16 points 2 months ago

I posted some of my experience with Kagi's LLM features a few months ago here: https://literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion are fantastic, because they do not hallucinate. The search integration is as good as anyone else's, but still nothing to write home about.

The Kagi assistant isn't new, by the way; I've been using it for almost a year now. It's now out of beta and has an improved UI, but the core functionality seems mostly the same.

As far as actual search goes, I don't find it especially useful. It's better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That's not useless, but it's certainly not a game-changer. I typically want to check its references anyway, so it doesn't really save me time in practice.
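The retrieve-then-summarize flow described above can be sketched in a few lines of Python. Everything here is hypothetical: the function names, the stubbed search results, and the omitted model call are illustrations of the general pattern, not Kagi's actual API or pipeline.

```python
# Sketch of a retrieval-augmented summary: take the top search
# results and feed their snippets to an LLM as context.

def build_summary_prompt(query, results):
    """Assemble an LLM prompt from search-result snippets (hypothetical helper)."""
    context = "\n\n".join(
        f"[{i}] {r['title']}\n{r['snippet']}" for i, r in enumerate(results, 1)
    )
    return (
        f"Summarize the following search results for the query "
        f"{query!r}, citing sources by number:\n\n{context}"
    )

# Stubbed results standing in for a real search backend.
results = [
    {"title": "Kagi Assistant docs", "snippet": "The assistant cites its sources."},
    {"title": "LLM summarization", "snippet": "Summaries should be verifiable."},
]

prompt = build_summary_prompt("kagi assistant", results)
# A call to a hosted or local model would go here; whatever it returns
# still needs the human verification step described above.
```

Because the model only ever sees a handful of snippets, the summary is bounded by the quality of the underlying search results, which is why checking the references remains necessary.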

Kagi's search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google's decay in recent years. I subscribed to the "Ultimate" Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you're interested in playing around with the latest premium models, I still think Kagi's Ultimate plan is a good deal.

That said, I've been disappointed with the development of LLMs this year across the board, and I'm not convinced any of them are worth the money at this point. This isn't so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don't quite understand why; I guess they are optimizing for benchmarks that simply don't align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now it always seems to give me wrong or incomplete answers.

My biggest piece of advice when dealing with any LLM-based tools, including Kagi's, is: don't use it for anything you're not able to validate and correct on your own. It's just a time-saver, not a substitute for your own skills and knowledge.

[–] belated_frog_pants@beehaw.org 12 points 2 months ago (1 children)

Welp, and there goes any reason to try it. God I hate AI.

[–] westyvw@lemm.ee 4 points 2 months ago* (last edited 2 months ago) (1 children)

Do you really hate algorithms (since AI doesn't really exist yet) or do you hate the hype and marketing?

[–] SweetCitrusBuzz@beehaw.org 11 points 2 months ago

Well, the web shouldn't be human. But if they were to attempt to make it so, LLMs would not be the way.

[–] lenninscjay@lemm.ee 4 points 2 months ago (1 children)

I’ve used some of these features when I’m trying to skim many articles for my grad school work. It’s not terrible.

There is a use case for this stuff. Especially in a search engine.

Short of hosting your own LLM, Kagi is one of the few I’d hope can get it right and respect privacy. (So far unverified on the AI side tho)

[–] noodlejetski@lemm.ee 12 points 2 months ago (3 children)
[–] FaceDeer@fedia.io 6 points 2 months ago (1 children)

It's often not a choice between an AI-generated summary and a human-generated one, though. It's a choice between an AI-generated summary and no summary.

[–] noodlejetski@lemm.ee 5 points 2 months ago* (last edited 2 months ago)

so, no summary at all, or one that does a shit job of pointing out the important bits, or gets them wrong and therefore isn't a proper summary? choices, choices.

[–] Cenotaph@mander.xyz 5 points 2 months ago

Kagi actually has an interesting implementation for their search summary, and while not perfect, it is miles better than the alternatives in my experience. It uses Anthropic's Claude for language processing and incorporates Wolfram Alpha for queries that need numerical accuracy. Compared to Google AI or Copilot, I've been seeing good results.

While it isn't perfect at summarizing, I've found their implementation to be "good enough", and it can summarize pieces near-instantly, which I think is where it actually becomes useful. Humans may be better, but I don't have the money or time to pay a human to summarize pages for me to see if they're worth delving into further.
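The division of labor described above, a language model for prose and an exact engine for numbers, can be sketched roughly as a router. This is purely illustrative: the `route` and `eval_arith` names are hypothetical, and the tiny arithmetic evaluator is a stand-in for a real numerical backend like Wolfram Alpha, not Kagi's actual pipeline.

```python
# Hypothetical router: send queries that parse as simple arithmetic
# to an exact evaluator, everything else to the language model.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arith(expr):
    """Safely evaluate simple arithmetic (stand-in for a numerical engine)."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def route(query):
    """Return ('exact', answer) for arithmetic queries, else ('llm', query)."""
    try:
        return ("exact", eval_arith(query))
    except (ValueError, SyntaxError):
        return ("llm", query)  # would be handed to the LLM here
```

Routing the numerical cases away from the model is what keeps those answers from being hallucinated, since the evaluator either computes the exact result or declines.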

[–] lenninscjay@lemm.ee 2 points 2 months ago

Well that’s a bummer. I believe it.

[–] Mesa@programming.dev 1 points 2 months ago

Hot take: the web should not be more human.

And I'm pretty progressive on technological matters. There should still be a clear separation, though.