this post was submitted on 10 Oct 2023
165 points (93.7% liked)
Technology
We're getting customers that want to use LLMs to query databases and such. And I fully expect that to work well 95% of the time, but not always, while looking like it always works correctly. And you can tell customers a hundred times that it's not 100% reliable; they'll still forget.
So, at some point, that LLM will randomly run a complete nonsense query, returning data that's so wildly wrong that the customers notice. And precisely that is the moment when they'll realize: holy crap, this thing isn't always reliable?! It's been telling us inaccurate information 5% of the time?! Why did no one inform us?!?!?!
And then we'll tell them that we did inform them and no, it cannot be fixed. Then the project will get cancelled and everyone lived happily ever after.
Or something. Can't wait to see it.
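The "it will randomly run a nonsense query" worry above can at least be fenced in with guard rails. A minimal sketch, assuming a hypothetical `run_llm_query` wrapper and an illustrative sqlite3 table (none of this comes from the thread): reject anything that isn't a single read-only SELECT, and dry-run it with EXPLAIN before executing.

```python
import sqlite3

def run_llm_query(conn, sql):
    """Guard a model-generated query: allow only a single read-only
    SELECT, and dry-run it with EXPLAIN so syntactically broken SQL
    fails before touching real data."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise ValueError("refusing non-SELECT or multi-statement query")
    conn.execute("EXPLAIN " + stripped)  # raises on invalid SQL
    return conn.execute(stripped).fetchall()

# Illustrative setup, standing in for whatever database the customer has.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
print(run_llm_query(conn, "SELECT name FROM users"))  # [('ada',)]
```

This doesn't catch the scarier failure mode, a query that is valid SQL but answers the wrong question, which is exactly the 5% the thread is about.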
Would you trust a fresh-out-of-college intern to do it? That's been my metric for relying on LLMs.
Yup, this is the way to think about LLMs: infinite eager interns willing to try anything and never trusting themselves to say "I don't know".
It might actually help the intern if they use it:
https://www.consultancy.uk/news/35431/chatgpt-most-benefits-below-average-consultants-finds-bcg-pilot
For a while now I've been speculating that people raving about these things are just bad at their jobs; I've never been able to get anything useful out of an LLM.
If your job involves diagnosing a wide array of different problems that change day to day, it's extremely useful. If you do the same thing over and over again, it may not be as useful.
You’re right, but it’s worse than that. I have been in the game for decades. One bum formula and the whole platform loses credibility. There isn’t a customer on the planet who’ll see it as just 5%.