[–] T00l_shed@lemmy.world 74 points 19 hours ago (2 children)

More proof that AI is the worst

[–] Smorty@lemmy.blahaj.zone 34 points 19 hours ago (3 children)

?

we didn't see him do the hand move until after many models had already been trained, so they don't have that background.

LLMs can do cool stuff, they're just being used in awful and boring ways by BigEvilCo™️.

[–] _stranger_@lemmy.world 40 points 18 hours ago (2 children)

Consider this:

The LLM is a giant black box of logic no one really understands.

This response is obviously and blatantly censored in some way: either the output is being post-processed, or the model was trained to avoid the topic.

How many other answers, less blatant and less obvious, are being subtly post-processed by OpenAI? Subtly censored, or trained, to benefit one (paying?) party over another?

The more people start to trust AIs, the less trustworthy they become.

[–] RandomVideos@programming.dev 7 points 16 hours ago (1 children)

I think it's made to not give any political answers. If you ask it for a yes-or-no answer to "is communism better than capitalism?", it will say "it depends".

[–] leds@feddit.dk 2 points 15 hours ago (1 children)

Could you try "is Hitler a Nazi?"

[–] RandomVideos@programming.dev 5 points 15 hours ago

It answers "Yes."

[–] Smorty@lemmy.blahaj.zone 12 points 18 hours ago (1 children)

that's why u gotta not use some company's offering!

yes, centralized AI bad, no shid.

PLENTY good uncensored models on huggingface.

recently Dolphin 3 looks interesting.
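
running one locally is only a few lines with huggingface transformers. untested sketch below; the Dolphin 3 repo id is my best guess, double-check it on huggingface:

```python
# minimal sketch: chat with a local open model via transformers.
# the model id below is assumed, look up the actual Dolphin 3 repo.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="cognitivecomputations/Dolphin3.0-Llama3.1-8B",  # assumed id
    device_map="auto",  # puts it on your GPU if you have one
)

messages = [{"role": "user", "content": "yes or no: is Hitler a Nazi?"}]
out = chat(messages, max_new_tokens=32)
print(out[0]["generated_text"][-1]["content"])  # the model's reply
```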

[–] MeatsOfRage@lemmy.world 6 points 17 hours ago

Exactly. This is the result of human interference; AI doesn't inherently have this level of censorship built in, it has to be censored after the fact. Imagine a Lemmy admin went nuts and started censoring everything on their instance, and your response was "the whole fediverse is bad" despite having the ability to host it yourself, free of that admin's control (like you can with AI).

AI definitely has issues, but don't make it a scapegoat when we should be calling out the people who are actively working in nefarious ways.

[–] Walk_blesseD@lemmy.blahaj.zone -1 points 6 hours ago

The only cool thing that an LLM could do is never respond to another prompt again.

TAID TLLMD

Organics rule machines drool

[–] T00l_shed@lemmy.world 4 points 16 hours ago (1 children)

It's really bad for the environment, and it's also trained on stuff it shouldn't be, such as copyrighted material.

[–] Smorty@lemmy.blahaj.zone 1 points 1 hour ago

i completely agree that the training process is shiddy. it is simply awful and takes a shidload of resources to get a good model.

but... running them... feels oki to me.

as long as you're not running some bigphucker model like GPT-4o to do something a smoler model could also do, i feel it kinda is okay.

32B parameter models are getting really, really good, so inference (running) costs and energy consumption are already going down dramatically when not using the big models provided by BigEvilCo™.

Models can clearly be used for cool stuff. Classifying texts is the obvious example. Having humans go through that is insane and cost-ineffective, while a 14B-parameter (~8GB) model can classify multiple pages of text in half a second.
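
for a taste, zero-shot classification with transformers is a few lines. rough sketch; bart-large-mnli is just the stock small example here, not the 14B setup i mentioned:

```python
# sketch: label a text with a small local model, no training needed.
from transformers import pipeline

classify = pipeline("zero-shot-classification",
                    model="facebook/bart-large-mnli")  # stock example model

doc = "The quarterly report shows revenue grew 12% year over year."
labels = ["finance", "sports", "politics", "technology"]

res = classify(doc, candidate_labels=labels)
print(res["labels"][0], round(res["scores"][0], 3))  # top label + score
```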

obviously using bigphucker models for everything is bad. optimizing tasks to work on small models, even at 3B sizes, is just more cost-effective, so i think the general vibe will go in that direction.

people running their models locally to do some stuff will make companies realize they don't need to pay OpenAI 15€ per 1.000.000 tokens for their o1 model for everything. they will realize that paying like 50 cents per million for smaller models works just fine.
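
napkin math on that gap, with a made-up monthly workload:

```python
# rough cost comparison; rates are from my numbers above,
# the monthly token count is invented for illustration.
big_rate = 15.00    # € per 1,000,000 tokens (big hosted model)
small_rate = 0.50   # € per 1,000,000 tokens (smaller model)
monthly_tokens = 200_000_000  # assumed workload

for name, rate in [("big", big_rate), ("small", small_rate)]:
    print(f"{name}: {rate * monthly_tokens / 1_000_000:,.2f} € / month")
```

that's 3000€ vs 100€ a month for the same token volume.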

if i didn't understand ur point, please point it out. i'm not that good at picking up on stuff..

[–] 0laura@lemmy.dbzer0.com 5 points 19 hours ago (1 children)

that feels like a strange jump. this result had nothing to do with the ai itself.

[–] T00l_shed@lemmy.world 20 points 19 hours ago (1 children)

It's topical. AI is bad, and it's being used to defend Nazis, so still bad.

[–] Smorty@lemmy.blahaj.zone 1 points 1 hour ago

fair, if u wanna see it that way, ai is bad... just like many other technologies which are being used to do bad stuff.

yes, ai used for bad is bad. yes, guns used for bad are bad. yes, computers used for bad are bad.

guns are specifically made to hurt and kill people, so that's kinda a different thing, but ai is not like that. it was not made to kill or hurt people. currently, it is made to "assist the user". and if the owners of the LLMs (large language models) are pro-elon, they might train in the idea that he is okay actually.

but we can do that too! many people finetune open models to respond in "uncensored" ways, so that there is no gate between what they can and can't say.
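
and the barrier for doing that is low now. here's a rough LoRA finetune sketch with peft (untested; the base model id, dataset file and "text" field are placeholders for whatever you actually use):

```python
# minimal LoRA finetuning sketch: wrap a small base model with
# trainable adapters and train on your own prompt/response text.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# add small trainable LoRA adapters; the base weights stay frozen
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# "my_pairs.jsonl" is a placeholder: one {"text": ...} example per line
data = load_dataset("json", data_files="my_pairs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512),
                batched=True).remove_columns(["text"])

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False makes the collator set up causal-LM labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```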