this post was submitted on 17 Mar 2025
1330 points (99.7% liked)
Programmer Humor
you are viewing a single comment's thread
view the rest of the comments
The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors, leaving the senior incapable of working on their own stuff because they’re constantly in janitorial mode.
Plenty of good programmers use AI extensively while working. Me included.
Mostly as an advanced autocomplete, template builder, or documentation parser.
You obviously need to be good at it so you can see at a glance whether the written code is good or if it's bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.
Obviously you cannot develop without programming knowledge, but with programming knowledge it's just another tool.
I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does, they add new, exciting, and difficult-to-find bugs, while maintaining false confidence in their code and themselves.
I have seen so much code that looks good at first, second, and third glance, but is actually full of shit, and I was only able to find that shit by doing external validation like talking to the dev or brainstorming ways to test it, the things you categorically cannot do with an unreliable random-word generator.
There is an exception to this, I think. I don't make AI write much, but it is convenient to give it a simple Java class, say "write a toString", and have it spit out something usable.
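For instance, with a trivial (made-up) value class, the generated method is the kind of boilerplate that's easy to verify at a glance:

```java
// A trivial value class; the toString below is the sort of boilerplate
// an LLM reliably produces and a human can check in seconds.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public String toString() {
        return "Point{x=" + x + ", y=" + y + "}";
    }

    public static void main(String[] args) {
        System.out.println(new Point(3, 4)); // prints Point{x=3, y=4}
    }
}
```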
That's why you use unit tests and integration tests.
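A minimal sketch of that safety net, with a hypothetical `slugify` helper (the kind of code one might let an LLM draft) guarded by plain checks, no framework needed:

```java
// Sketch: a tiny "unit test" guarding a pasted-in helper. If the generated
// code is subtly wrong, the checks catch it regardless of who wrote it.
public class SlugTest {
    // Hypothetical helper: lowercase, collapse non-alphanumerics to '-'.
    static String slugify(String s) {
        return s.trim().toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")   // runs of junk -> single dash
                .replaceAll("^-|-$", "");        // trim leading/trailing dashes
    }

    public static void main(String[] args) {
        if (!slugify("Hello, World!").equals("hello-world"))
            throw new AssertionError("basic case failed");
        if (!slugify("  spaced  out  ").equals("spaced-out"))
            throw new AssertionError("whitespace case failed");
        System.out.println("all checks passed");
    }
}
```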
I can write bad code myself or copy bad code from who-knows-where. It's not something introduced by LLMs.
Remember the famous Linus rant? "You code this function without understanding it, and thus your code is shit."
As I said, just a tool like many other before it.
I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were LLM output and which parts I wrote entirely myself, and, honestly, I don't think anyone else could tell the difference either.
It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.
I may come back with a particular piece of code that I specifically remember being an output from DeepSeek, and within the whole context it would probably be indistinguishable.
Also, not all LLM usage is about copying from it. Many times you copy code to it and ask the thing to explain it to you, or ask general questions. For instance, to look up specific functions in C#'s extensive libraries.
So no change to how it was before then
Different shit, same smell
Depending on what it is you're trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that's complete, getting suggestions from an LLM that's been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it's full of shit is part of the exercise.
It's like mid-way between rote following tutorials, modding, and asking for help in support channels. It isn't as rigid as the available tutorials, and though it's prone to hallucination and not as knowledgeable as support channel regulars, it's also a lot more patient in many cases and doesn't have its own life that it needs to go live.
Decent learning tool if you're ready to check what it's doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it'll work, though? That's not going to go well at all.
It'll just keep getting better over time, though. The current AI is way better than it was 5 years ago, and in 5 years it'll be way better than it is now.
That's certainly one theory, but as we are largely out of training data there's not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
Just generate the training material, duh.
DeepSeek
This is certainly the pattern that is actively emerging.
I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe more breakthroughs are needed before it happens.
"More breakthroughs," spoken like we get those every day, like milk delivery.
I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.
None of it's perfect, but a lot of it's fuckin' spooky, and any form of "well it can't do [blank]" has a half-life.
Seen a few YouTube channels now that just churn out AI-generated content, usually audio only with a generated picture on screen. Vast amounts could be made that cheaply; Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they are going to have to delete stuff.
Dipshits going "I made this!" is not indicative of what this makes possible.
I kid you not, I took ML back in 2014 as an extra semester in my undergrad. The complaints then were the same as the complaints now: too much power required, too many false positives. The latter of the two has evolved into hallucinations.
If normal people going "I made this!" is so easily identified that it convinces no one, then who is this going to replace? You still need the right expert, right? All it creates is more work for experts who have to come in and fix broken AI output.
Despite results improving at an insane rate, very recently. And you think this is proof of a problem with... the results? Not the complaints?
People went "I made this!" with fucking Terragen. A program that renders wild alien landscapes which became generic after about the fifth one you saw. The problem there is not expertise. It's immense quantity for zero effort. None of that proves CGI in general is worthless non-art. It's just shifting what the computer will do for free.
At some point, we will take it for granted that text-to-speech can do an admirable job reading out whatever. It'll be a button you push when you're busy sometimes. The dipshits mass-uploading that for popular articles, over stock footage, will be as relevant as people posting seven thousand alien sunsets.
The results do keep improving, of course. But it's not some silver bullet. Yes, your enthusiasm is warranted, but you peddle it like the second coming of Christ, which I don't like encouraging.
I've done no such thing.
I called it half-decent, spooky, and admirable.
That turns out to be good enough, for a bunch of applications. Even the parts that are just a chatbot fooling people are useful. And massively better than the era you're comparing this to.
We have to deal with this honestly. Neural networks have officially caught on, and anything with examples can be approximated. Anything. The hard part is reminding people what "approximated" means. Being wrong sometimes is normal. Humans are wrong about all kinds of stuff. But for some reason, people think computers bring unflinching perfection - and approach life-or-death scenarios with this sloppy magic.
Personally I'm excited for position tracking with accelerometers. Naively integrating into velocity and location immediately sends you to outer space. Clever filtering almost sorta kinda works. But it's a complex noisy problem, with a minimal output, where approximate answers get partial credit. So long as it's tuned for walking around versus riding a missile, it should Just Work.
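The drift from naive integration can be sketched in a few lines (assumed numbers: a 0.05 m/s² sensor bias sampled at 100 Hz, purely illustrative):

```java
// Sketch: naively double-integrating a slightly biased accelerometer reading.
// A small constant bias grows linearly in velocity and quadratically in
// position, which is why raw integration "sends you to outer space".
public class DriftDemo {
    static double driftAfter(double biasMps2, double dt, int steps) {
        double v = 0.0, x = 0.0;
        for (int i = 0; i < steps; i++) {
            v += biasMps2 * dt;   // velocity error grows linearly
            x += v * dt;          // position error grows quadratically
        }
        return x;
    }

    public static void main(String[] args) {
        // 0.05 m/s^2 bias, 100 Hz samples, ten minutes of walking:
        // position is off by roughly 0.5 * bias * t^2 ~ 9 km.
        System.out.printf("drift after 10 min: %.0f m%n",
                driftAfter(0.05, 0.01, 60_000));
    }
}
```

That kilometers-scale error after a few minutes is what the filtering (or a learned model) has to beat.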
Similarly restrained use-cases will do minor witchcraft on a pittance of electricity. It's not like matrix math is hard, for computers. LLMs just try to do as much of it as possible.
If you follow AI news you should know that it’s basically out of training data, that returns on extra training diminish sharply (so extra training data would have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than its predecessors or other LLMs at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.
The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.
We don't need leaps and bounds from here. We're already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.
And this is with LLMs - which are stupid. We didn't design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that'll fake its way through explaining why the answer is yes or no. If we're only interested in the accuracy of that answer, then we're wasting effort on the quality of the faking.
Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between "but right now it sucks at [blank]" and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
I’m not saying they don’t have applications. But the idea of them being a one size fits all solution to everything is something being sold to VC investors and shareholders.
As you say - the issue is accuracy. And, as you also say - that’s not what these things do, and instead they make predictions about what comes next and present that confidently. Hallucinations aren’t errors, they’re what they were built to do.
If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
Maybe along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.
That's your interpretation.
that's reality. Unless you're too deluded to think it's magic.
No, I meant to say your interpretation of what I said.
Everything is possible in theory. Doesn't mean everything has happened or is just about to happen.
My hobby: extrapolating.
To get better it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.
And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.