It only takes getting a made-up bullshit answer from ChatGPT a couple of times to learn your lesson and just skip asking ChatGPT anything altogether.
My girlfriend gave me a mini heart attack when she told me that my favorite band broke up. Turns out it was ChatGPT making shit up; it came up with a random name for the final album too.
But chatgpt always gives such great answers on topics I know nothing at all about!
Oh yeah, AI can easily replace all the jobs I don't understand too!
Gell-Mann amnesia. Might have to invent a special name for the AI flavour of it.
I stopped using it when I asked who I was and it said I was a prolific author, then proceeded to name various books I absolutely did not write.
I just read "The Autobiography of QueenHawlSera"!
Have I been duped?
Why the fuck would it know who you are?
If you have an account, you can tell it things about yourself. I used my boss's account for a project at work (felt gross). I made the mistake of saying "good morning" to it one day, and it proceeded to ask me if I was going to do (activities related to my boss's personal life - and the details were accurate). I was thinking, "why does he tell it so much about himself?"
So it's working as intended.
and I'm apparently a famous Tiktoker and Youtuber.
I feel like a lot of people in this community underestimate the average person's willingness to trust an AI. Over the past few months, every time I've seen a coworker look something up, I have never seen them click on a website to view the answer. They'll always take what the AI summary tells them at face value.
Which is very scary
I was using it to blow through an online math course I'd ultimately decided I didn't need but didn't want to drop. One step of a problem I had it solve involved finding the square root of something; it spat out a number that was kind of close, but functionally unusable. I told it it made a mistake three times and it gave a different number each time. When I finally gave it the right answer and asked, "are you running a calculation or just making up a number" it said that if I logged in, it would use real time calculations. Logged in on a different device, asked the same question, it again made up a number, but when I pointed it out, it corrected itself on the first try. Very janky.
ChatGPT doesn't actually do calculations. It can generate code that will actually calculate the answer, or provide a formula, but ChatGPT cannot do math.
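If you actually need a number out of it, the more reliable move is to have it hand you the formula or code and do the evaluation yourself. A minimal sketch, assuming the official openai Python client; the model name, prompt, and numbers are just illustrative:

```python
# Sketch: ask the model for the *method*, then compute the number yourself.
# Assumes the openai Python client (>= 1.0) with OPENAI_API_KEY set;
# model name and prompt are illustrative, not from this thread.
import math
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Give me only a Python expression for the square root of 7396.",
    }],
)
print(resp.choices[0].message.content)  # likely something like "math.sqrt(7396)"

# The trustworthy part: do the arithmetic outside the model.
print(math.sqrt(7396))  # 86.0
```

Same idea as ChatGPT's code-interpreter tool: the model writes the calculation, and something deterministic runs it.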
It's just like me fr fr
So it forced you to ask it many times? Now imagine that you paid for it each time. For the creator then, mission fucking accomplished.
You need multi-shot prompting when it comes to math. Either the motherfucker gets it right, or you will not be able to course correct it in a lot of cases. When a token is in the context, it's in the context and you're fucked.
Alternatively you could edit the context, correct the parameters and then run it again.
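Roughly like this, as a sketch against an OpenAI-compatible endpoint (the base URL, model name, and prompts are placeholders): instead of appending "that's wrong" to a context that already contains the bad answer, throw the turn away and re-run with a tighter prompt.

```python
# Sketch of "edit the context and re-run" instead of arguing with the model.
# Assumes an OpenAI-compatible server (llama.cpp, vLLM, etc.); base_url,
# model name, and prompts are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def ask(messages):
    resp = client.chat.completions.create(model="mistral-small-24b", messages=messages)
    return resp.choices[0].message.content

# First attempt.
first = ask([{"role": "user", "content": "What is 47 * 389? Answer with just the number."}])

# If it's wrong, don't reply "no, try again" -- the bad token is already in
# the context and will anchor every follow-up. Drop the turn and start fresh.
retry = ask([{
    "role": "user",
    "content": "Compute 47 * 389 step by step, then state the final result.",
}])

print(first)
print(retry)
```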
On the other side of the shit aisle
Shoutout to my man Mistral Small 24B who is so insecure, it will talk itself out of correct answers. It's so much like me in not having any self worth or confidence.
I've only really found it useful when you provide the source information/data in your prompt. E.g., say you want to convert one data format to another, like table data into JSON.
It works very consistently in those types of use cases. Otherwise it's a dice roll.
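That pattern also makes the output easy to sanity-check, since the JSON either parses or it doesn't. A rough sketch, assuming the openai Python client; the model name and the sample table are made up:

```python
# Sketch of the "paste the source data into the prompt" pattern.
# Assumes the openai Python client; model name and sample table are made up.
import json
from openai import OpenAI

client = OpenAI()

table = """name,role,start_year
Alice,engineer,2019
Bob,designer,2021"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Convert this CSV into a JSON array of objects. Return only the JSON.\n\n" + table,
    }],
)

records = json.loads(resp.choices[0].message.content)  # fails loudly if it freelanced
print(records[0]["name"])  # "Alice"
```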
That's what people get when they ask me questions too, but they still bother me all the time, so clearly that's not going to work.