this post was submitted on 10 May 2025
241 points (96.9% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] jj4211@lemmy.world 5 points 10 hours ago* (last edited 7 hours ago) (1 children)

I'll probably not read the summary you wrote of the report I also probably wouldn't read, so I really don't care about your use of an LLM; that's fine. You have a soul-crushingly stupid job responsibility, and I wish you well in your efforts to find better.

What I can't stand is two things. First, someone who has something to convey that could have been a sentence, but has to make it "professionally" long and uses an LLM to drag it out. This isn't new, but it's more common now that LLMs make it effortless.

Second, someone who refuses to answer "I don't know" to a question and acts like they do know instead, nowadays often using an LLM to fake it. I could have asked the LLM myself if that would have worked. I've seen this exchange too many times:

  • "I'm having an issue with X"
  • "<suspiciously verbose answer that sounds like it could be relevant, but has nothing at all to do with X>"
  • "Uhhh, that was a bunch of unhelpful irrelevant nonsense, let me rephrase X in case you misunderstood"
  • "Oh if it was unhelpful, that wasn't my fault, I was using ChatGPT" They try to get by faking it with LLM, then blame the LLM for any mistakes. Yes, it is your fault, you used the LLM and you tried not to disclose it because you wanted to take credit.

Genuinely, I would ask for their resignation if it were in my power to fire them.