this post was submitted on 13 Jun 2025
98 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

jesus this is gross man

[–] self@awful.systems 22 points 2 days ago (18 children)

centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

i won’t say that claude is conscious but i won’t say that it isn’t either and its always better to air on the side of caution

the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

schizoposting

fuck off with this

even if its wise imo to try not to be abusive to AI’s just incase

describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

[–] swlabr@awful.systems 11 points 2 days ago (6 children)

Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.

[–] Architeuthis@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

Children really shouldn't be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I'm guessing you just mean to forego I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize type prompts.

[–] swlabr@awful.systems 3 points 1 day ago

Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right

I agree! I'm more thinking of the case where a kid might overhear what they think is a phone call when it's actually someone being mean to Siri or whatever. I mean, there are more options than "be nice to digital entities" if we're trying to teach children to be good humans, don't get me wrong. I don't give a shit about the non-feelings of the LLMs.
