this post was submitted on 07 May 2025
685 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] magnetosphere@fedia.io 117 points 1 day ago (7 children)

One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.

This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.

[–] wise_pancake@lemmy.ca 61 points 1 day ago* (last edited 1 day ago) (2 children)

Apple Intelligence and the first versions of Gemini are the perfect examples of this.

iOS still doesn’t do what was sold in the ads, almost a full year later.

Edit: also, things like email summaries don’t work, the email categories are awful, notification summaries are straight-up unhinged, and I don’t think anyone asked for Image Playground.

[–] SomeoneSomewhere@lemmy.nz 48 points 1 day ago* (last edited 1 day ago) (1 children)

Insert 'Full Self Driving' Here.

Also, Outlook's auto alt-text function told me today that a conveyor belt was a picture of someone's screen.

[–] magnetosphere@fedia.io 14 points 1 day ago

Calling it “Full Self Driving” is such blatant false advertising.

[–] Buelldozer@lemmy.today 12 points 1 day ago

Apple Intelligence and the first versions of Gemini are the perfect examples of this.

Add Amazon's Alexa+ to that list. It's nearly a year overdue and still nowhere in sight.

[–] swlabr@awful.systems 47 points 1 day ago

capitalism working against itself

More like: capitalism reaching its own logical conclusion

[–] UltraGiGaGigantic@lemmy.ml 7 points 1 day ago (2 children)

The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.

[–] MonkderVierte@lemmy.ml 2 points 1 day ago* (last edited 1 day ago)

Ah, like with the TPM black box?

[–] sexy_peach@feddit.org 2 points 1 day ago

This would only work for a service that customers want or need

[–] luciole@beehaw.org 19 points 1 day ago (1 children)

I’m making a generous assumption by suggesting that “ready” is even possible

To be honest, it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing it’s happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.

[–] Soyweiser@awful.systems 18 points 1 day ago (2 children)

There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn't going to be possible to get where they want to go.
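To make the shape of that claim concrete (a rough sketch with made-up constants, not the numbers from any actual study): if loss follows a power law in dataset size, then each equal drop in loss needs a bigger multiple of training data than the last.

```python
# Rough illustration of the scaling-law claim (hypothetical constants, not OpenAI's figures):
# if loss follows a power law in dataset size, L(D) = A * D**(-alpha),
# then each equal-sized drop in loss requires a larger multiple of data than the one before.

A, alpha = 10.0, 0.1  # made-up fit constants for illustration only


def data_needed(loss):
    """Dataset size (arbitrary token units) to reach a given loss under L = A * D^-alpha."""
    return (A / loss) ** (1 / alpha)


prev = None
for loss in [4.0, 3.5, 3.0, 2.5, 2.0]:  # equal, "linear" improvements in loss
    d = data_needed(loss)
    growth = f"{d / prev:.1f}x more data" if prev else "baseline"
    print(f"loss {loss:.1f}: ~{d:.2e} tokens ({growth})")
    prev = d
```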

[–] dgerard@awful.systems 14 points 1 day ago

OpenAI admitted that with o1! They included graphs directly showing gains requiring exponential effort.

[–] Sidyctism2@discuss.tchncs.de -1 points 18 hours ago (1 children)

Do you have any articles on this? I have heard this claim quite a few times, but I'm wondering how they put numbers on the capabilities of those models.

[–] Soyweiser@awful.systems 1 points 3 hours ago

Sorry, nope, didn't keep a link.

[–] spankmonkey@lemmy.world 24 points 1 day ago

(I’m making a generous assumption by suggesting that “ready” is even possible)

It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still at the random-fun-but-not-expected-to-be-accurate phase.

The current marketing for AI won't be matched by anything real in the foreseeable future. The desired complexity isn't going to exist in silicon at a reasonable scale.

[–] Jimmycakes@lemmy.world 17 points 1 day ago* (last edited 1 day ago)

Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it's trash

[–] calcopiritus@lemmy.world 7 points 1 day ago (1 children)

If they didn't overpromise, they wouldn't have had mountains of money to burn, so they wouldn't have advanced the technology as much.

Tech giants can't wait decades until the technology is ready, they want their VC money now.

[–] sexy_peach@feddit.org 3 points 1 day ago (1 children)

Sure, but if the tech doesn't deliver in the end, all that money is burnt.

If it does deliver, it's still oligarchs deciding what tech we get.

[–] calcopiritus@lemmy.world 3 points 22 hours ago

Yes. The ones that have power are the ones that decide. And oligarchs by definition have a lot of power.