What do you do when ChatGPT just makes shit up, or answers a yes-or-no question incorrectly? You'd have no way of knowing it was wrong.
ChatGPT is most useful when you may not know the right answer, but you know a wrong answer when you see one. It's very useful for technical issues. Much quicker for troubleshooting than searching page after page for a solution.
It’s actually great at troubleshooting Linux stuff, weirdly enough lol
Web/full-stack development?
Yeah, that makes sense. The success rate might fall off a cliff in more complex software projects, e.g. applications whose designs go beyond 10 UML boxes and run to hundreds of thousands of lines, especially ones not written in JS/Python.
Can you post the app?
While this is an important thing to understand about AI, it's an overstated issue once it is understood. For most questions I ask AI, it doesn't matter if it's correct as long as it pulls up some half-useful info to get me on track (e.g. programming). For other questions, I only ask it if I need to figure out where to look next, which it will usually do just fine.
The first page of my search results is all AI-generated garbage articles anyway; at least I know what I'm getting with GPT and can take it as such.
Yup, as long as you are aware that it could be wrong and look at its output critically, LLMs at GPT scale are very useful tools. The best description I've heard is that it's like a lightning-fast intern who often gets things wrong but will always give it a go.
So long as you're calibrated to "how might this be wrong" when looking at the results it is exceptionally useful.
Not the other commenter:
I usually have an idea about the thing I'm asking, and if not then I'll look up the topics mentioned after some guided brainstorming
I've also found that asking the same question again, after resetting the chat, can give you an idea of what is happening
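A rough sketch of that "ask again in a fresh chat" sanity check, assuming the OpenAI Python SDK (>= 1.0) and an API key in the environment; the `ask_fresh` helper, the model name, and the example question are just placeholders, not anything official:

```python
# Rough sketch: ask the same question in separate, fresh chat sessions
# and eyeball whether the answers agree. Assumes `pip install openai`
# and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask_fresh(question: str, model: str = "gpt-4o-mini") -> str:
    """Send the question in a brand-new conversation with no shared history."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

question = "Does ext4 support individual files larger than 16 TiB? Answer yes or no, then explain."
answers = [ask_fresh(question) for _ in range(3)]

# If the fresh sessions disagree with each other, that's a hint the model is guessing.
for i, answer in enumerate(answers, start=1):
    print(f"--- fresh session {i} ---\n{answer}\n")
```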
Bing AI provides references in its "More Precise" mode
That works for some things, but ChatGPT makes a lot of shit up
I'm curious what you use it for, because I try to use it daily for IT-related queries and it gets less than half of what I ask correct. I basically have to fact-check almost everything it tells me, which kind of defeats the purpose. It does shine when I need instructions for obscure things, though: the other day I asked it how to get into a PERC controller on some old server, and while Google had nothing helpful, ChatGPT laid out the instructions to get in there and rebuild a disk perfectly. So while it has some usefulness, I generally can't trust it fully.
The point you have to remember is that it is trained on bulk data from the internet in a very inefficient manner; it needs to see thousands of examples before it starts getting any sort of understanding of something. If you ask it "how do I do {common task} in {popular language}", you will generally get excellent results, but the further you stray from that, the more error-prone it becomes.
Still, it is often good at getting you onto the right track when you are unsure where to start, and it is fantastic for learning a new language. I've been using it extensively while learning C#, where I know what I want to code but not exactly how to use the language's existing features to do it.
But generally you can't (shouldn't) trust web search results fully either. At the end of the day, the onus is on you as the user to do your due diligence.
I've seen ChatGPT give me wrong information, and sometimes it would be bad to execute the code or command it generated, but I know enough to ask "are you sure that's correct?". Hell, you can just challenge it each time, or open a new session and ask it "what does this code do: [the code it generated]".
You shouldn't just paste a command from a Stack Overflow search result into your terminal either. And at least with ChatGPT you can ask it to explain the command or code in detail, and it will walk you through what each step does.
Also, pasting that command from Stack Overflow into ChatGPT and adding your specific context around it is HUGE; there's a rough sketch of that below. That's why I say they are different products/use cases, but they work well in concert. They just don't work well combined like Bing and Google have been doing.
edit: I guess lemmy escapes certain characters and it ate my post.
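A minimal sketch of that "paste the command plus your context" workflow, again assuming the OpenAI Python SDK (>= 1.0); the `explain_command` helper, the model name, and the example `find` command are made up for illustration:

```python
# Rough sketch: wrap a command you found online with your own context and
# ask the model to explain it step by step before you ever run it.
from openai import OpenAI

client = OpenAI()

def explain_command(command: str, context: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to explain a found-online command in the user's own context."""
    prompt = (
        "I found this command online:\n\n"
        f"    {command}\n\n"
        f"My situation: {context}\n\n"
        "Walk me through what each part does, whether it is safe to run in my "
        "situation, and what I should change before running it."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_command(
    command="find . -name '*.log' -mtime +30 -delete",
    context="Ubuntu 22.04 web server; I only want to clean logs under /var/www/myapp/logs",
))
```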
ChatGPT is not a search engine. It takes random shit from the Internet and stitches it together. It can often get things wrong in my experience. It's best to always fact check.
Keyword searches worked fine and pulled up exactly what I wanted for years, I swear to god. Somewhere in the last decade though websites have gamed the system and now I can't find anything no matter how I word my search. It's depressing.
I prefer that Stack Overflow looks the same as it did way back when. And Stack is usually where I find my answers.
I use ChatGPT every day too. Because Google is being such a shit about YouTube, I'm in the process of moving away from Google altogether. I use DuckDuckGo for search, which indirectly uses Bing. It's mostly OK. Sometimes I'm forced to try Google, and it usually doesn't help. But for programming, yeah, Stack Overflow feels downright regressive now.
I'm honestly kind of surprised about this news, considering how horrible Google's results are now.