this post was submitted on 23 May 2025
42 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Take that, Saltman! Bet you never thought it was possible!

top 10 comments
[–] luciole@beehaw.org 10 points 5 hours ago

yo dawg I heard you like bullshit so we put some deliberate bullshit in your accidental bullshit

[–] trungulox@lemm.ee 10 points 5 hours ago

Dude. I will never buy anything from an ad. You fucks have been trying for 30+ years and not once has it worked.

Know what I would pay you for? A functional fucking search engine, you fucking dirty myopic shit cabbages.

Fuck the whole fucking lot of you

[–] sweetgemberry@lemmy.blahaj.zone 8 points 5 hours ago (1 children)

Money is so incredibly devoid of creativity it hurts. They see it as nothing more than an extension of the ever-present algorithm. It's so fucking sad. I get that a lot of money is spent on making these systems, but why can't it be for anything more than printing more money? (Yes, capitalism, comrades, I get it.)

I'm sleep-deprived and have only thought about it for 10 minutes, but here are some ideas off the top of my head:

  1. Use AI to assist in reporting the disgusting content that's pumped out for kids on YouTube. You don't want to pay people to review every video, so at least get it to report suspicious content.

  2. Train it to spot the signs of fraud to help prevent people from falling for scams or identity theft.

  3. Specialise it rather than making it some generalised system that spurts out bullshit. Actually give it roles, a purpose to exist. ChatGPT is not a personal assistant; it wasn't designed to be one. It cannot function reliably as one. If you want to sell it as that, then make it do that reliably and nothing more.

  4. Train AI to assist with accessibility broadly across the internet. A page doesn't have TTS? No problem, our AI was made for this purpose. The images have no meta description? It's all good, our AI will interpret what it sees for you. Unable to pass this captcha because of an impairment? No worries, our AI will deal with it.

  5. For fuck's sake, design it to want more than immediate satisfaction from the user. I don't want a one-stop solution to an issue; I want to discuss it and figure it out. Let it think for more than one second. Let it contemplate the broader discussion rather than just my last response. Let it speak, not simply respond. Let it speak honestly when it does not know an answer or is uncertain. It is merely glorified I/O in its current form, useless for anything more than asking it which side to butter my toast (and 50% of the time it'd confidently declare that I should butter the crust). If you must burn the climate and doom our children in order to operate a B-movie Skynet, then please at least let it have the decency and respect to tell me it's as ignorant as the apes that funded its existence.

[–] atrielienz@lemmy.world 4 points 5 hours ago

Some of the things you brought up, Google does appear to be doing with their AI. Unfortunately, I think this is a damned-if-you-do, damned-if-you-don't scenario. AI (especially integrated with a security-nightmare system like IoT) isn't trustworthy in its own right, but it becomes a security nightmare of epic proportions when you realize that most IoT devices that connect to the internet don't even have rudimentary security protocols. When you add the fact that Gemini is trying to do IoT and at the same time do things like call screening, text recognition, etc. (for the purposes of spam removal and scam mitigation), and that to do so it has to collect and monitor your data, you recognize why so many security people are distrustful.

Google says they don't collect that data, that the processing is done on device rather than needing to be sent back to Google, and that this data won't be used to further train the AI. People don't trust it.

[–] Soyweiser@awful.systems 6 points 5 hours ago (1 children)
[–] GeeDubHayduke@lemmy.dbzer0.com 6 points 5 hours ago* (last edited 5 hours ago)

To fuck up everything, including passing butter.

[–] Luffy879@lemmy.ml 5 points 6 hours ago

I wonder how many returns there will be because the AI imagined something.

[–] h_ramus@lemm.ee 25 points 10 hours ago (1 children)

Ads. It's always ads. The festering pile of digital excrement that refuses to be cleaned up. It reproduces instead.

[–] swlabr@awful.systems 7 points 6 hours ago

Just remember, it can, and will, always get worse!

[–] Archangel1313@lemm.ee 9 points 10 hours ago

The internet just gets worse every day.