this post was submitted on 13 May 2025
449 points (100.0% liked)

TechTakes

[–] TheObviousSolution@lemm.ee 19 points 1 day ago* (last edited 1 day ago) (1 children)

Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts with different prompts, each of which failed, and some manual attempts to fix the output, they gave up.

I don't think it's entirely useless as it is. It's just that people have built a hammer they know produces something useful and have kept bolting on iterative improvements, with a lot of compensation going on beneath the engine. It's artificial because it's being developed to artificially fulfill prompts, which it does succeed at.

When people do develop true intelligence-on-demand, you'll know, because you will lose your job rather than simply gain another tool. The prompts and conversation flows people pay to submit to the training are really helping advance the research into their replacements.

[–] brygphilomena@lemmy.dbzer0.com -1 points 1 day ago (3 children)

My opinion is it can be good when used narrowly.

Write a concise function that takes these inputs, does this, and outputs a dict with this information.
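For example, a prompt that narrow might yield something like this minimal sketch (the function name, inputs, field names, and tax-rate parameter are all hypothetical, made up here just to illustrate the shape of the task):

```python
def summarize_order(order_id: str, items: list[dict], tax_rate: float) -> dict:
    """Total up an order and return a dict with the requested fields.

    Each item is expected to have "price" and "quantity" keys.
    """
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return {
        "order_id": order_id,
        "item_count": sum(item["quantity"] for item in items),
        "subtotal": round(subtotal, 2),
        "tax": round(subtotal * tax_rate, 2),
        "total": round(subtotal * (1 + tax_rate), 2),
    }
```

That's the scale at which the inputs, the logic, and the output shape are all fully pinned down in the prompt.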

But so often it wants to be overly verbose. And it's not smart enough to keep track of much of the project for any meaningful length of time, so it will redo something that already exists, or want to touch something that's used in multiple places without knowing or caring how it's used.

But it still takes someone who knows how the puzzle pieces go together. To architect it and lay it out. To really know what the inputs and outputs need to be. If someone gives it free rein to do whatever, it'll just make slop.

[–] swlabr@awful.systems 21 points 1 day ago (1 children)

That’s the problem, isn’t it? If it can only maybe be good when used narrowly, what’s the point? If you’ve managed to corner a subproblem down to where an LLM can generate the code for it, you’ve already done 99% of the work, and you’re better off just coding it yourself. At that point, it’s not “good when used narrowly”, it’s useless.

[–] frezik@midwest.social 6 points 1 day ago* (last edited 1 day ago)

There's something similar going on with air traffic control. 90% of their job could be automated (and it has been technically feasible to do so for quite some time), but we do want humans to be able to step in when things suddenly get complicated. However, if they're not constantly practicing those skills, then they won't be any good when an emergency happens and the automation gets shut off.

The problem becomes one of squishy human psychology. Maybe you can automate 90% of the job, but you intentionally dial that back to 70% to give humans a safe practice space. Within that remaining slice, when do you actually choose to hand the human control?

It's a tough problem, and the benefits to solving it are obvious. Nobody has solved it for air traffic control, which is why there's no comprehensive ATC automation package out there. I don't know that we can solve it for programmers, either.

[–] froztbyte@awful.systems 6 points 1 day ago

My opinion is it can be good when used narrowly.

ah, as narrowly as I intend to regard your opinion? got it