this post was submitted on 20 May 2025
315 points (92.9% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] glimse@lemmy.world 2 points 12 hours ago (2 children)

Not a big AI guy, but the last line is dumb as hell. LLMs can be insanely useful when used by the right people.

Should have guessed it'd be a bad take from the "friendly reminder" opener, but they clearly don't see LLMs as a tool; they see them as the end product, which is just ignorant.

[–] pemptago@lemmy.ml 4 points 8 hours ago (1 children)

Criticisms of unethically built models can't help but mention that we're making these tradeoffs for generally crappy returns. A common counterargument I see now is this focus on one small dig while ignoring all the other points. I also see this effort to distance while defending. You might not be a "big" AI guy, but showing up to say it can be useful while overlooking the valid points tells me you're a regular AI guy.

[–] glimse@lemmy.world -1 points 5 hours ago (1 children)

OP explained what the acronym meant, so was I wrong to assume they meant the entire technology and not just ChatGPT and Grok? Ethically and unethically built models are both LLMs, and to shit on the whole thing because of the bad ones is hilariously ignorant.

[–] pemptago@lemmy.ml -1 points 4 hours ago* (last edited 4 hours ago) (1 children)

was I wrong to assume they meant the entire technology and not just chatgpt and grok?

yes.

That's all the time I have for sealion questions.

[–] glimse@lemmy.world 1 points 3 hours ago

Have fun with your hate boner

[–] finitebanjo@lemmy.world 0 points 9 hours ago (2 children)

What do you think they are useful for? Be aware I'm going to argue against any answer you give with fervor.

[–] Empyreus@lemmy.world 0 points 1 hour ago (1 children)

There are so many casual examples of things LLMs excel at. Learning a second language? Having something that can break down context, provide examples, or hold practice conversations is incredibly helpful and easy with LLMs. There's an endless number of little things they make easier and are great at: planning a trip and want a quick itinerary suggestion? Need help running a Dungeons and Dragons campaign? Want something to help you summarize a topic or plan a basic learning path on a topic? There are so many valid, helpful uses where it's faster or better than the current options.

[–] finitebanjo@lemmy.world 1 points 21 minutes ago* (last edited 20 minutes ago)

It hallucinates at a rate that makes it completely unusable for all of those tasks. If it's strictly inferior to non-LLM solutions for all of these problems, then clearly you're better off not using it at all.

You can search up an itinerary for popular tourism locations.

There are platforms, free or paid, that teach you a second language instead of making random shit up.

There are countless DnD campaigns you can find online or tools to make planning them easy.

You can learn that 2+2=4 and not 5, or work through logic puzzles that are variations on common ones ChatGPT is incapable of parsing due to its statistical nature, for free from sites like Khan Academy.

ChatGPT is shit, mate. It has no concept of anything; it just generates the next token in a chain of tokens until it produces some garbage that roughly approximates an answer. Why not just get an actual answer, 100% accurate human output, from a real person?
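
That "next token in a chain of tokens" bit isn't a figure of speech, it's literally the whole loop. Here's a minimal sketch of what that looks like, assuming the Hugging Face transformers library with GPT-2 standing in for any decoder-only model (illustrative only, not what ChatGPT specifically runs):

```python
# Rough sketch of autoregressive "next token" generation, using GPT-2 as a
# stand-in for any decoder-only LLM (assumption for illustration, not the
# actual ChatGPT stack).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(10):
    logits = model(ids).logits[:, -1, :]                   # scores for the next token only
    next_id = torch.argmax(logits, dim=-1, keepdim=True)   # greedily pick the most likely token
    ids = torch.cat([ids, next_id], dim=-1)                # append it and do it all again

print(tokenizer.decode(ids[0]))
```

Nothing in that loop looks anything up or checks whether the continuation is true; it only picks a statistically likely next token.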

[–] glimse@lemmy.world 1 points 9 hours ago (1 children)

I want to be clear I'm not talking about the layman here (though I hear ChatGPT is pretty good at creating quizzes based on notes you give it) - actual scientific work is being done with the help of LLMs.

Concrete examples of this would be www.OpenCatalystProject.com, or IBM using it to help discover a new COVID drug.

I'd bring up all the machine learning breakthroughs - of which there are likely hundreds - but I'd imagine you'd skewer me as they're not LANGUAGE models (which is fair as I said LLM, not ML).

What you won't hear me defending is AI marketed to the masses. Pretty much any value it provides is offset by the things mentioned in the OP. But for science? Hell yeah, keep up the good work.

[–] finitebanjo@lemmy.world 6 points 9 hours ago* (last edited 9 hours ago) (2 children)

You're right, those aren't fucking LLMs, so stick with the program. Everybody else in here is talking about one specific thing, and it's not research-oriented machine learning algorithms. It's bullshit generators.

[–] KeenFlame@feddit.nu 2 points 6 hours ago

This is the exact same technology, as if arguing semantics will make your point any stronger.

[–] glimse@lemmy.world 0 points 9 hours ago (1 children)

You were supposed to argue with fervor, not make stuff up.

You're wrong, they both use LLMs.

[–] finitebanjo@lemmy.world 0 points 8 hours ago (1 children)

Any research done using an LLM, as opposed to research on LLMs themselves, is publishing bullshit.

[–] glimse@lemmy.world 1 points 8 hours ago (1 children)

Why double down on being wrong? My two examples aren't publishing bullshit.

If OP was only talking about ChatGPT and the like, maybe they should have said that instead of lumping all LLMs together?

Either way, I think we're done here. A shame you never actually argued with fervor.

[–] finitebanjo@lemmy.world 0 points 8 hours ago* (last edited 8 hours ago) (1 children)

Fine then:

  1. IBM - Not an LLM

  2. Meta Open Catalyst - Not an LLM

In fact, the Open Catalyst paper specifically compares its model to LLMs, in that both kinds of model improved with larger datasets (and increased processing power).

Eat shit

[–] glimse@lemmy.world 0 points 8 hours ago

  1. IBM DeepSearch. But you're half right, the drug I was thinking of was BenevolentAI...using an LLM similar to IBM's.

  2. CatBERTa

But nice try. Eat shit, I guess