this post was submitted on 30 May 2025
24 points (74.0% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions



As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

[–] Seasoned_Greetings@lemm.ee 4 points 1 day ago* (last edited 1 day ago) (1 children)

My 70 year old boss and his 50 year old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.

These are average people, and they didn't even realize they were using AI, much less how unreliable it can be.

I think there's going to be a place for forums to discuss niche problems for as long as AI just means an advanced LLM and not actual intelligence.

When diagnosing software-related tech problems with proper instructions, there's always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you're currently using.

With hardware, though, that's unlikely to happen as long as the model numbers match. However, when relying on AI-generated instructions, anything is possible.

[–] Kolanaki@pawb.social 9 points 1 day ago (1 children)

Maybe in the sense that the Internet may become so inundated with AI garbage that the only way to get factual information is by actually reading a book or finding a real person to ask, face to face.

[–] SpicyColdFartChamber@lemm.ee 5 points 1 day ago* (last edited 1 day ago) (1 children)

You know how low-background steel, made before nuclear weapons testing, is prized? I wonder if that's going to happen with data from before 2022 as well now. Lol.

[–] chaosCruiser@futurology.today 3 points 1 day ago (1 children)

There might be a way to mitigate that damage. You could categorize the training data by the source. If it's verified to be written by a human, you could give it a bigger weight. If not, it's probably contaminated by AI, so give it a smaller weight. Humans still exist, so it's still possible to obtain clean data. Quantity is still a problem, since these models are really thirsty for data.

[–] Tar_alcaran@sh.itjust.works 2 points 1 day ago (1 children)

LLMs can't distinguish truth from falsehoods, they only produce output that resembles other output. So they can't tell the difference between human and AI input.

That's a problem when you want to automate the curation and annotation process. So far, you could have just dumped all of your data into the model, but that might not be an option in the future, as more and more of the training data was generated by other LLMs.

When that approach stops working, AI companies need to figure out a way to get high quality data, and that's when it becomes useful to have data that was verified to be written by actual people. This way, an AI doesn't even need to be able to curate the data, as humans have done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
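A rough sketch of what that prioritization might look like at the data-sampling stage. The provenance labels and weight values here are illustrative assumptions, not a description of any real training pipeline:

```python
import random

# Hypothetical corpus: each record carries a provenance label.
corpus = [
    {"text": "Forum answer written and verified by a human", "source": "human_verified"},
    {"text": "Scraped blog post, origin unknown", "source": "unverified"},
    {"text": "Post-2022 scrape, possibly AI-contaminated", "source": "unverified"},
]

# Verified-human data gets a larger sampling weight than the
# possibly contaminated bulk data.
WEIGHTS = {"human_verified": 3.0, "unverified": 1.0}

def sample_batch(corpus, k):
    """Draw a training batch, oversampling verified-human records."""
    weights = [WEIGHTS[doc["source"]] for doc in corpus]
    return random.choices(corpus, weights=weights, k=k)

batch = sample_batch(corpus, k=8)
```

This keeps the vast unverified pile in play while letting the small verified slice punch above its weight.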

[–] kalkulat@lemmy.world 6 points 1 day ago* (last edited 1 day ago) (3 children)

Trouble is that 'quick answers' mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.

When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.

A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.

A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.

[–] Tar_alcaran@sh.itjust.works 5 points 1 day ago

Trouble is that 'quick answers' mean the LLM took no time to do a thorough search.

LLMs don't "search". They essentially provide weighted parrot-answers based on what they've seen elsewhere.

If you tell an LLM that the sky is red, they will tell you the sky is red. If you tell them your eyes are the colour of the sky, they will repeat that your eyes are red. LLMs aren't capable of checking if something is true.

They're just really fast parrots with a big vocabulary. And every time they squawk, it burns a tree.

Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles to apply them correctly. It's sort of like counting the hours in a week: it says it's calculating 7*24, which looks right, but somehow the answer is still 10 🤯. Like, WTF? How did that happen? In reality, that specific problem might not be that hard, but the same phenomenon shows up in more complicated problems too. I could give some other examples, but this post is long enough as it is.

For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
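The hours-in-a-week example shows how simple the division of labour can be: let the LLM state the formula, then do the arithmetic yourself with a calculator or a couple of lines of code:

```python
# Formula (as an LLM might correctly state it):
# hours per week = days per week * hours per day
days_per_week = 7
hours_per_day = 24

hours_per_week = days_per_week * hours_per_day
print(hours_per_week)  # 168, not 10
```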

[–] Wolf314159@startrek.website 1 points 1 day ago (1 children)

Is your abuse of the ellipsis and dashes supposed to be ironic? Isn't that an LLM tell?

I'm not even sure what the ('phrase') construct is meant to imply, but it's wild. Your abuse of punctuation in general feels like a machine trying to convince us it's human, or a machine transcribing a human's stream of consciousness.

[–] FeelzGoodMan420@eviltoast.org 15 points 2 days ago (1 children)

Probably, however I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.

[–] chaosCruiser@futurology.today 6 points 2 days ago (1 children)

Fair enough, and that's actually really good. You're going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason why the internet has an answer to just about anything.

[–] FeelzGoodMan420@eviltoast.org 5 points 2 days ago

Haha. Yes, I'll be a tech Boomer, stuck in my old ways. Although answers on forums are often straight misinformation too, so really there's no perfect solution for getting answers. You just have to cross-check as many sources as possible.

[–] oakey66@lemmy.world 19 points 2 days ago (4 children)

No. It hallucinates all the time.

[–] psx_crab@lemmy.zip 14 points 2 days ago (2 children)

And where does an LLM get its answers? Forums and social media. If the LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer from it, they start asking on forums and social media.

So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human first diagnosing your car.

[–] oyo@lemm.ee 4 points 2 days ago (1 children)

The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results consisting of entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.


to an extent, yes, but not completely

[–] quediuspayu@lemmy.world 7 points 2 days ago (5 children)

LLMs seem awesome in their knowledge until you hear their answers about stuff you already know, and it makes you wonder if anything was correct.

What we now call hallucination used to be called fabulation in other fields: inventing tales or stories.

I'm curious about what is the shortest acceptable answer for these things and if something close to "I don't know" is even an option.

[–] FaceDeer@fedia.io 2 points 2 days ago (1 children)

LLMs seem awesome in their knowledge until you hear their answers about stuff you already know, and it makes you wonder if anything was correct.

This applies equally well to human-generated answers to stuff.

[–] quediuspayu@lemmy.world 2 points 2 days ago

True, the difference is that with humans it's usually more public, so it's easier for someone to call bullshit. With LLMs, the bullshit is served with the intimacy of embarrassing porn, so it's less likely to come with any warnings.

[–] Oberyn@lemmy.world 3 points 2 days ago

If the tech matures enough, potentially!

You're not wrong about LLMs (currently?) being bad with tech support, but so are search engines lol

[–] Rhynoplaz@lemmy.world 6 points 2 days ago (1 children)

There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel, only to look like a buffoon when we realized the AI was completely wrong.

Now I look right past the AI answer and read the sources it's pulling from. Then I don't have to worry about anything misinterpreting the answer.

[–] Quazatron@lemmy.world 9 points 2 days ago (5 children)

True, but soon the sources will be AI generated too, in a big GIGO loop.

[–] FaceDeer@fedia.io 3 points 2 days ago (2 children)

People will use whatever method of finding answers that works best for them.

Stuck, you contact tech support, wait weeks for a reply, and the cycle continues

Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post a question? If questions need to be asked then there's nothing stopping them from still being asked.

[–] Dragonstaff@leminal.space 2 points 2 days ago (3 children)

If you cut a forum's population by 90% it will die.

This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things, it will starve the channels that can answer the things it can't (including everything new).

[–] chaosCruiser@futurology.today 2 points 2 days ago (1 children)

That is an option, and undoubtedly some people will continue to do that. It’s just that the number of those people might go down in the future.

Some people like forums and such much more than LLMs, so that number probably won’t go down to zero. It’s just that someone has to write that first answer, so that eventually other people might benefit from it.

What if it’s a very new product and a new problem? Back in the old days, that would translate to the question being asked very quickly in the only place where you can do that - the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might just try all the other methods first, and find nothing of value. That’s the scenario I was mainly thinking of.

[–] FaceDeer@fedia.io 3 points 2 days ago (2 children)

I did suggest a possible solution to this - the AI search agent itself could post a question in a forum somewhere if it has been unable to find an answer.

This isn't a feature of mainstream AI search agents yet, but I've been following development, and this sort of thing is already being done by hobbyists. Agentic AI workflows can be a lot more sophisticated than a simple "do a search, summarize the results" loop. An AI agent could even try to solve the problem itself: reading source code, running tests in a sandbox, and so forth. If it figures out a solution that it didn't find online, maybe it could even post answers to some of those unanswered forum questions. Assuming the forum doesn't ban AI, of course.
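A toy version of that loop might look like the sketch below. Here search_web, run_in_sandbox, and post_to_forum are hypothetical stand-ins stubbed out locally; no real search, sandbox, or forum API is being described:

```python
# Stubbed-out stand-ins for a hypothetical agent's capabilities.
def search_web(question):
    return []  # pretend nothing useful was found online

def run_in_sandbox(question):
    return None  # pretend local experimentation also failed

posted_questions = []

def post_to_forum(question):
    posted_questions.append(question)  # simulate opening a forum thread

def answer_or_escalate(question):
    """Search first, then try to solve locally, then ask humans."""
    results = search_web(question)
    if results:
        return results[0]
    solution = run_in_sandbox(question)
    if solution is not None:
        return solution
    # Nothing found and no local fix: fall back to asking people,
    # which keeps new questions flowing into public forums.
    post_to_forum(question)
    return None

answer = answer_or_escalate("Printer model X won't scan to USB?")
```

The point is the fallback at the end: even a fully automated agent can keep feeding unanswered questions back into public spaces instead of letting them die in a chat window.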

Basically, I think this is a case of extrapolating problems without also extrapolating the possibilities of solutions. Like the old Malthusian scenario, where Malthus projected population growth without also accounting for the fact that as demand for food rises new technologies for making food production more productive would also be developed. We won't get to a situation where most people are using LLMs for answers without LLMs being good at giving answers.
