this post was submitted on 10 May 2025
241 points (96.9% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

all 32 comments
[–] wondrous_strange@lemmy.world 8 points 1 day ago

They hate me regardless of LLMs

[–] Gradually_Adjusting@lemmy.world 21 points 1 day ago (1 children)

A good 20% of my weekly workload is writing summaries of reports that I know nobody reads.

I haven't had a raise in years. They say there's no room in the budget but won't entertain any solutions that cut costs.

So I gave myself a raise while I'm looking for other opportunities.

Mine is not a usual situation. I don't recommend LLMs for work that matters in the least. I'm just killing some mindless busywork while people wonder why my brain hasn't melted from it yet.

[–] jj4211@lemmy.world 5 points 7 hours ago* (last edited 4 hours ago) (1 children)

I'll probably not read the summary you wrote of the report I also probably wouldn't read, so I really don't care about your use of LLM because that's fine. You have a soul crushingly stupid job responsibility and I wish you well in your efforts to find better.

What I can't stand are:

Someone who had something to convey that could have been a sentence, but had to make it "professionally" long and used an LLM to drag it out. This isn't new, but it's more common now thanks to LLMs making it effortless.

Someone who refuses to answer "I don't know" to a question, but acts like they do know instead, particularly by using an LLM to fake it nowadays. I could have asked the LLM myself if that would have worked. I've seen this exchange too much:

  • "I'm having an issue with X"
  • "<suspiciously verbose answer that sounds like it could be relevant, but has nothing at all to do with X>"
  • "Uhhh, that was a bunch of unhelpful irrelevant nonsense, let me rephrase X in case you misunderstood"
  • "Oh if it was unhelpful, that wasn't my fault, I was using ChatGPT" They try to get by faking it with LLM, then blame the LLM for any mistakes. Yes, it is your fault, you used the LLM and you tried not to disclose it because you wanted to take credit.
[–] Gradually_Adjusting@lemmy.world 3 points 5 hours ago

Genuinely I would ask for their resignation if it was in my power to fire them

[–] TootSweet@lemmy.world 37 points 1 day ago (2 children)

The new CTO at my workplace just the other day announced they'd partnered with Google to get Gemini Code Complete and they'd be piloting its use on a particular project. They had someone from Google present all the "benefits" of using AI for writing code and everything.

Most of my team is facepalming so hard. Except the one guy who's an AI enthusiast. (That guy is also a massive conspiracy theorist.) I'm not sure about everyone's sentiment, though. I'm kind of, sort of "in charge" on my team, and apparently the company's stance is now that AI use is acceptable. If I told folks not to use AI, I don't think the business would back me, so I'm having to be "diplomatic" about it.

So, I've told the team "just... don't push any code to the central repo without understanding it at least as well as if you'd written it yourself."

But yeah. I'm pretty pissed. Hopefully the CTO isn't high enough on the smell of his own farts to decide that "pilot" project is an unmitigated success despite all evidence to the contrary.

[–] XTL@sopuli.xyz 5 points 1 day ago (1 children)

So, they're ok with sending all your (customers') code to Google. I'm mildly positively surprised my company isn't.

[–] TootSweet@lemmy.world 4 points 1 day ago

Yeah, it's pretty weird. My employer is a traditionally brick-and-mortar sort of business that's only recently starting to learn that the internet isn't a "fad". My employer's policy has always been that we don't use "the cloud", specifically so we can keep our trade secrets secret. They're only now starting to approach the idea of running our code on off-premises hardware our company doesn't own. They're using Google Cloud rather than AWS, though, because "Amazon is a competitor" and they're paranoid that Amazon is going to steal our trade secrets. Which... doesn't make sense, because so is Google. (Of course, that doesn't really give away what industry my employer is in, because both Google and Amazon do basically everything.) Google pinky swears they'll never steal/use/share our data. And of course that might be true right up until the moment they decide to change it, and meanwhile we're screwed because we're vendor-locked-in with them.

Anyway, yeah. My employer is nincompoops. Lol.

[–] _____@lemm.ee 13 points 1 day ago

They'll just send the code to an AI text gen to "summarize" what it does and put that in the PR body.

You have to understand most of these devs don't give a shit (in a based kind of way; fuck work). At the same time it becomes a detriment, because they will use code gens so they don't have to do any work, and then it bites you in the ass because you're the one reviewing.

[–] dedales@slrpnk.net 55 points 2 days ago* (last edited 2 days ago)

Are we solving AI slop with peer pressure ??

[–] AI_toothbrush@lemmy.zip 12 points 1 day ago* (last edited 1 day ago)

Classmates too... I know I'm not gonna use AI, and that the people using AI are harming themselves, but it's still shit to see how little work they have to put into stuff while I'm writing essays for hours. But the results do show: my teachers are pretty disappointed in the writing/speech skills of my classmates, while I'm doing pretty well (as are most other classmates who don't use AI). I could very clearly see how AI could lead to cognitive decline in certain metrics.

[–] racketlauncher831@lemmy.ml 23 points 1 day ago

I'll tell my coworkers to gfts if they submit AI code for me to review. If you can generate AI code, I can also generate AI code, so why would I want to review your code instead of my own? Why not, instead of me reviewing your code, I take your task and throw twenty AI outputs at you, and you tell me which one of them actually works?

[–] henfredemars@infosec.pub 28 points 2 days ago (3 children)

I use slop for our review process because management already decides what the results are before we even begin. Is that an acceptable use case? I just need filler text that has no impact on anything.

[–] jj4211@lemmy.world 2 points 6 hours ago

Depends on whether I'm likely to think I should read it or not.

I have some automation for some data fields that I know no one reads, that basically says "See general description" rather than trying to fill out the fields as directed. It's a scenario where there's like 4 subtly different "description" fields that are all mandatory and I just write up the description once and redirect everyone to the one field.
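
It's nothing fancy; roughly something like this (a rough sketch of the idea, with the field names invented for illustration):

```python
# Rough sketch; field names are made up for illustration.
CANONICAL = "general_description"
POINTER = "See general description."

def fill_redundant_fields(record: dict) -> dict:
    """Point every other empty, mandatory description field at the one that actually gets written."""
    for field, value in record.items():
        if field != CANONICAL and not value:
            record[field] = POINTER
    return record

record = {
    "summary": "",
    "technical_description": "",
    "customer_description": "",
    "general_description": "The actual write-up lives here.",
}
print(fill_redundant_fields(record))
```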

[–] laserm@lemmy.world 17 points 1 day ago

I personally don't think it matters that much, but if you wanna avoid using generative AI, use a Lorem Ipsum or a Markov chain.
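
A word-level Markov chain is only a handful of lines of Python; a rough sketch like this (the file name is just a placeholder for whatever old report you feed it) will churn out statistically plausible filler without going near an LLM:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=100):
    """Random-walk the chain to produce plausible-sounding filler."""
    word = random.choice(list(chain))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart from a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# "last_years_report.txt" stands in for whatever text you train it on.
with open("last_years_report.txt") as f:
    print(generate(build_chain(f.read())))
```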

[–] Emptiness@lemmy.world 6 points 1 day ago (1 children)

"Here are the technical points I am going to implement in my IT service area and in the tactical order they are going to be done. Please dumb this down for me so that a management group can understand it and approve it."

I don't have time to "explain it like you're five", I have real work to do. Judge me all you like.

[–] jj4211@lemmy.world 2 points 6 hours ago

I'll try to see if management is secretly as uninterested in the filler as I am, and I have had some success with "here's a summary, you can let me sweat the details". Occasionally I hit leadership so insecure they demand a wall of text, and I could understand resorting to an LLM as a wall-of-text generator if all other options have been exhausted.

[–] Eyekaytee@aussie.zone 0 points 1 day ago

I don't understand what AI they were using that made others so unhappy.

[–] LuxSpark@lemmy.cafe -5 points 1 day ago* (last edited 1 day ago) (2 children)

If I can use AI to do my boring work or learn something new, then I dgaf what coworkers think.

[–] JandroDelSol@lemmy.world 3 points 7 hours ago (1 children)

AI hallucinates constantly; you aren't learning anything

[–] jj4211@lemmy.world 1 points 6 hours ago

Well, it can help someone gain knowledge that's already widespread among everyone else, but you have to be constantly wary that it will screw up harder than an old Stack Overflow answer... Basic questions like "How do you combine an array of strings in language X?" I think an LLM can answer (of course, so would an "I'm Feeling Lucky" Google search).
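
For what it's worth, that particular question is a one-liner in most languages; in Python, for example:

```python
# Combining an array of strings: join them on a separator.
words = ["paste", "these", "together"]
print(" ".join(words))  # -> "paste these together"
```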

[–] jj4211@lemmy.world 1 points 6 hours ago

Whatever LLM use you do for yourself, or for stupid fluff data that no one needs but bureaucracy demands, sure. But if you inflict that output on me in ways that inconvenience me, then I'm going to be frustrated with you. If I wanted LLM output, I could have gotten it myself, and likely already tried before asking.