this post was submitted on 07 Mar 2025
TechTakes
Here's some food for thought; ha ha, only serious. What if none of this is new?
If this is a dealbreaker today, then it should have been a dealbreaker over a decade ago, when Google first rolled out Knowledge panels, which were also often inaccurate and unhelpful.
If this isn't acceptable from Google, then it shouldn't be acceptable from DuckDuckGo, which serves the same page-one results, including an AI summary and panels, nor from any other search engine. If summaries are unacceptable from Gemini, which has handily topped the leaderboards for weeks, then they're not acceptable from models by any other vendor either, including Alibaba, High-Flyer, Meta, Microsoft, or Twitter.
If fake, hallucinated, confabulated, or synthetic search results are ruining the Web today, then they were ruining the Web over two decades ago and have not lessened since. The economic incentives and actors have shifted slightly, but the overall goal of fraudulent clicks still underlies the presentation.
If machine learning isn't acceptable in collating search results today, then search engines would not exist. The issue is sheer data: ever since about 1991, before the Web took off, there has been too much data available on the Internet to search exhaustively and quickly. The problem is recursive: when a user queries a popular search engine, their results are assembled by multiple different search backends, each using a different technique to learn what is relevant, because no one search strategy works at scale for most users asking most things.
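To make the "multiple searchers" point concrete, here's a toy sketch of how a metasearch layer might merge ranked lists from several backends using reciprocal rank fusion, a standard merging heuristic. Everything here (the backend names, the documents, the constant `k=60`) is invented for illustration and is not how any real engine actually works internally:

```python
# Toy reciprocal rank fusion (RRF): merge ranked result lists from
# several hypothetical backends. All names and data are made up.
from collections import defaultdict

def rrf_merge(ranked_lists, k=60):
    """Score each document by summing 1/(k + rank) across all lists."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] += 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Three imaginary backends disagree on ordering; RRF rewards consensus.
keyword_match = ["a", "b", "c"]
link_rank     = ["b", "a", "d"]
learned_model = ["b", "c", "a"]

print(rrf_merge([keyword_match, link_rank, learned_model]))
# → ['b', 'a', 'c', 'd']
```

The point of the sketch is only that the final ranking is itself a learned/heuristic aggregate of several competing strategies, which is the "recursive" property described above.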
I'm not saying this to defend Google but to steer y'all away from uncanny-valley reactionism. The search-engine business model was always odious, but we were willing to tolerate it because it was very inaccurate and easy to game, like a silly automaton which obeys simple rules. Now we are approaching the ability to conduct automated reference interviews and suddenly we have an "oops, all AI!" moment as if it weren't always generative AI from the beginning.
(precursor: imma be saying "I" a lot in this post. yes I mean a lot of these observations from a personal perspective, but I (haha) hope that it is also clear that I don't mean them from only that)
indeed I agree a lot of it is not. the method may vary but the motivation/philosophy does not
however I do believe (and, hell, this is why I'm posting) that there is value in differentiating (ed: not in the mathematical sense) in the details
in concrete terms, I agree. it also largely squares with when google started being offensively useless/less-good (ime it varied by domain (which post-hoc I think also got impacted by search-eng product dev decisions? supposition, never tried to trace. don't think I ever will), affecting different ones at different timepoints)
every time I see this shit pop up from DDG (or, similarly, in other contexts (e.g. AWS)) I every so often "give it a test", and when it fucks up I send feedback of "please for the love of god stop forcing this shit on people" (<-- actual quote (sometimes more detail is added))
exactly correct, and a succinct explanation of at least some of the discomfort/rejection of these systems. there is a lot of detail and nuance in when/where/why people reject things built on/relying on those systems, and I don't want to get sidetracked on that here (not least because lemmy's probably abysmal at margins), but it exists and I think it's well worth engaging with all those communities wrt the substantive parts of their nope.gifs
this is almost a false equivalence imo (and I'm somewhat surprised to see you make the statement). 1) (speaking broadly) at the risk of being presumptuous (wrt the diverse viewpoints held by many others in the community here), I don't really think a lot of people (here) would be ones going "ML==AI"? in fact, I feel like a number of the people here would be ones (like myself) specifically trying to delineate between the two. 2) "in collating" is a very specific subphrasing (and again I'm somewhat surprised to see you use it)
"yesssssssss... but"
there's a very, very, very long conversation that is to be had here. and, hell, one of my perpetually-promised posts (yes I know) is something that touches on this
remind me later to get into a full rant with point-by-point examples of how continually-encroaching synthetic-media situations have dovetailed with a devolution in critical thought and detailed coverage. (def later tho: it features at least 3 side rants, and it takes a lot out of me)
"yesssssssss... but"
again, I think a notable substantive point of differentiation (still not math) here is the particulars of the endeavour. the "how" and the "why" of responding to user queries is, under LLM world, substantively different to what it was under "the previous mode" (and yes I know it's progressive and there's detail here too, but I hope you can see "2010 goog" vs "2025 gemini goog" easily enough without elucidation)
(okay I never actually dug into the SE biz, but you've given me a thing to read about ty)
I also ponder this myself sometimes, and I appreciate that part of it
y'know, I fucking hate the "we" here. (not directed at you and it's a whole thing but:) it's another false equivalence, brought on by abusive extractive fuckers. igwym but...... gah. rage.
(that touches on another post I've been trying to write for 3 years (this one I have not yet succeeded in clarifying (parts of it exist in some voicenotes to friends etc)))