this post was submitted on 03 Feb 2025
703 points (98.6% liked)
Technology
The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post that the user naively sourced from ChatGPT or Gemini and took as fact.
No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.
…Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.
*where you think they sourced from AI
you have no proof other than seeing ghosts everywhere.
Don’t get me wrong, fact-checking posts is important, but you have no evidence whether it’s AI, a human brain fart, or targeted disinformation 🤷🏻♀️
No I mean they literally label the post as “Gemini said this”
I see family do it too, type something into Gemini and just assume it looked it up or something.
I see no problem if the poster discloses that the source is AI. That automatically devalues the content of the post/comment and should trigger the reaction that the information is to be taken with a grain of salt and fact-checked, to improve the likelihood that what was written is fact.
An AI output is, most of the time, a good indicator of what the truth is, and it can add new talking points to a discussion. But it is of course not a “killer argument.”
The context is bad though.
The post I’m referencing has been removed, but there was a tiny “from gemini” footnote at the bottom that most upvoters clearly missed, and the whole thing was presented like a quote from a news article and taken as fact by OP in their own commentary.
And the larger point I’m making is that this poor soul had no idea Gemini is basically an improv actor compelled to continue whatever it writes, not a research agent.
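To illustrate the “improv actor” point: a language model only continues text by picking likely next tokens from learned statistics; it never looks anything up. Here’s a toy sketch of that idea (the tiny bigram table and its probabilities are entirely made up for illustration, standing in for billions of learned weights):

```python
# Hypothetical bigram table: maps a token to probabilities of the next token.
# A real LLM learns something vastly richer, but the principle is the same:
# generation is just "continue the text plausibly", not "look up the truth".
bigram_probs = {
    "the": {"capital": 0.4, "moon": 0.6},
    "capital": {"of": 1.0},
    "of": {"france": 0.7, "mars": 0.3},
    "france": {"is": 1.0},
    "is": {"paris": 0.5, "lyon": 0.5},  # equally confident either way!
}

def continue_text(prompt_tokens, steps=4):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        options = bigram_probs.get(tokens[-1])
        if not options:
            break
        # Greedy choice: take the most probable continuation,
        # regardless of whether it happens to be factually true.
        tokens.append(max(options, key=options.get))
    return tokens

print(continue_text(["the", "capital"]))
# → ['the', 'capital', 'of', 'france', 'is', 'paris']
```

The model here outputs “paris” not because it checked anything, but because that token was statistically likely to come next; swap the probabilities and it would assert “lyon” with exactly the same confidence.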
My sister, who is ridiculously smart, professional, and more put together than I am, didn’t either. She just searched for factual stuff in the Gemini app and assumed it was directly searching the internet.
AI is a good thinker, analyzer, spitballer, and initial source, yes, but it’s being marketed like an oracle, and that is going to screw the world up.
Well that’s just false.
You know what I meant; by “no one” I mean “a large majority of users.”
I did not know that. There’s a bunch of news articles going around claiming that even the creators of the models don’t understand them and that they are some sort of unfathomable magic black box. I assumed you were propagating that myth, but I was clearly mistaken.
Educate my family on how they work then, please and thanks. I’ve tried, and they refuse to listen; they’d rather trust the lying corpos trying to sell it to us.
“Your family” isn’t who I was talking about. Researchers and people in the space understand how LLMs work in intricate detail.
Unless your “no one” was colloquial, then yes, I totally agree with you! Practically no one understands how they work.
colloquially, no one enjoys a pedant