this post was submitted on 01 Jun 2024
1599 points (98.6% liked)
Technology
I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If you learn early on that AI answers can’t be trusted will people be less likely to use it, even if it improves to a useful point?
Personally, that's exactly what's happening to me. I've seen enough to know that AI can't be trusted to give a correct answer, so I don't use it for anything important. It's a novelty like Siri and Google Assistant were when they first came out (and honestly still are), where the best use for them is to get them to tell a joke or give you very narrow trivia information.
There must be a lot of people who are thinking the same. AI currently feels unhelpful and wrong; we'll see if it just becomes another passing fad.
If so, companies rolling out blatantly wrong AI are doing the world a service and protecting us against subtly wrong AI
Google were the good guys after all????
To be fair, you should fact check everything you read on the internet, no matter the source (though I admit that's getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquiring tool if you take everything it tells you with a grain of salt, just like with everything else.
This is one of the reasons why I only use AI implementations that cite their sources (edit: not Google's), because you can just check the source it used and see for yourself how much is accurate and how much is hallucinated bullshit. Hell, I've had AI cite an AI-generated webpage as its source on far too many occasions.
Going back to what I said at the start, have you ever read an article or watched a video on a subject you're knowledgeable about, just for fun to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.
Here's hoping.
I'm no defender of AI, and its habit of just blatantly making up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don't see this period of low to no trust lasting.
Remember how bad autocorrect was when it first rolled out? People would always be complaining about it and cracking jokes about how dumb it was. Then it slowly got better and better, and now, for the most part, everyone just trusts their phones to fix any spelling mistakes they make, as long as it's close enough.
There's a big difference between my phone changing "caulk" to "cock" and my phone telling me to make pizza with Elmer's glue.