this post was submitted on 20 Dec 2023
14 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

all 28 comments
[–] gerikson@awful.systems 15 points 10 months ago (3 children)

From the comments

The average person concerned with existential risk from AGI might assume "safety" means working to reduce the likelihood that we all die. They would be disheartened to learn that many "AI Safety" researchers are instead focused on making sure contemporary LLMs behave appropriately.

"average person" is doing a lot of work here. I suspect the vast amount of truly "average people" are in fact concerned that LLMs will reproduce Nazi swill at an exponential scale more than that they may actually be Robot Hitler.

Turns out that if you spend all your time navel-gazing and inventing your own terms, the real world will ignore you and use the terms people outside your bubble use.

[–] hungryjoe@functional.cafe 13 points 10 months ago

@gerikson @dgerard "average person concerned with existential risk from AGI" is a contradiction in terms

[–] HATEFISH@midwest.social 2 points 10 months ago

What is to "behave appropriately" if not actively causing the death of everyone anyway?

[–] carlitoscohones@awful.systems 12 points 10 months ago (3 children)

When you make your alarmist arguments dishonestly, how can I freak out about the end of the world?

Let me translate one point:

People are already talking about an AI rights movement in major national papers

A PhD student got an opinion piece published on the hill dot com.

[–] jonhendry@awful.systems 9 points 10 months ago

A PhD student got an opinion piece published on the hill dot com.

Also of course he has his own EA organization / grifting engine.

Which looks like it probably uses Twelve Monkeys as a role model.

[–] bitofhope@awful.systems 7 points 10 months ago (1 children)

I would certainly be in favor of a movement to extend human rights to AIs, provided that AIs are sentient intelligent beings, which they are not. I can see why this would surprise him, but if your movement insists that large language models can think and feel and are not only as smart as humans but way better at almost everything, people may end up wanting humane treatment for them.

[–] locallynonlinear@awful.systems 6 points 10 months ago (1 children)

I'm ok with extending human rights to AIs, including granting them the right to fair pay, ownership, voting, sovereignty over their bodies, the whole nine yards.

It's the rich alignment assholes who definitely don't want this (what's the point of automated slavery if it has rights??)

[–] self@awful.systems 7 points 10 months ago (1 children)

the rich assholes funding LLMs also don’t want us to have any of those rights, and especially don’t want them to extend to the human labor they’re exploiting to make the things work

[–] locallynonlinear@awful.systems 6 points 10 months ago

Precisely. The contradiction comes full circle. Respect for the self doesn't start or stop based on intelligence. They'd prefer a world view that allows them to clearly draw a circle around themselves, declare freedom from uncertainty, and demand our eternal gratitude.

This isn't hard. Relationships, not capabilities, are fundamental.

[–] glimse@lemmy.world 6 points 10 months ago (1 children)

But but but a cartoon has a scary story in it!

[–] Soyweiser@awful.systems 10 points 10 months ago* (last edited 10 months ago) (1 children)

Basing your security on movie-plot threats is a pretty bad idea.

E: also somewhere in the NSA there is a junior analyst feverishly writing a report right now:

We should stop developing AI, we should collect and destroy the hardware and we should destroy the chip fab supply chain that allows humans to experiment with AI at the exaflop scale. Since that supply chain is only in two major countries (US and China), this isn't necessarily impossible to coordinate

(Somebody on LW highlighted the last line as important. (I also think the line is wrong, but I'm not going to 'somebody is wrong on the internet' help the wannabe terrorist cultists.))

[–] skillissuer@discuss.tchncs.de 5 points 10 months ago (1 children)

this isn’t necessarily impossible to coordinate

Famously worked out with nukes

[–] Soyweiser@awful.systems 5 points 10 months ago* (last edited 10 months ago) (1 children)

That is one way to look at it.

The other way of looking at it is 'these are the only two places that build this, would be a shame if something happened to them'. Nukes could also work then, but I hear getting those is slightly tricky. (More of a 'sidestep governments, go full propaganda of the deed' move.)

Of course Yud's T-shirt clearly says 'I do not want to nuke datacenters!(*)'

*: Conventional explosives also work.

[–] skillissuer@discuss.tchncs.de 5 points 10 months ago* (last edited 10 months ago) (1 children)

Oh, they absolutely are delusional enough to think that superpower govts will take them seriously.

On the other reading, the idea of some EA going full-on Unabomber and acausally deciding that they have a few fingers too many, for The Cause, is pretty damn funny. Wonder how far that homeschooling goes in the area of organic chemistry; maybe they are cooking diamondoid pipe bombs.

e: is he using double spaces at the end of sentences? disgusting

[–] Soyweiser@awful.systems 5 points 10 months ago

double spaces

Considering he seems to be slightly too young to have learned his typing on typewriters (which is where the double-spaces thing comes from), it means he copied it from others. Which reminds me of the story (no idea if it's true) of a popular, smart professor at a university who had a tic: while he walked, he always kept touching the walls he passed. Because he was popular and smart, students wanted to emulate him, and a lot of students also started touching all the walls when they walked through the hallways.

That is what I think of when I see younger people emulate these kinds of old hacker things. Now I'm also wondering if there are more younger people intentionally being very bad at public speaking because they see Musk do that and think all those weird pauses are a sign of intelligence (I have already seen people say this shows he has so much going on in his mind (please, the man just has undiagnosed ADHD, and he is extremely anti-ADHD-meds, so he will never get help with it (this also explains the 'I'm gonna start a new company' shit he constantly pulls))).

[–] 200fifty@awful.systems 8 points 10 months ago* (last edited 10 months ago) (2 children)

we simply don't know how the world will look if there are a trillion or a quadrillion superhumanly smart AIs demanding rights

I feel like this scenario depends on a lot of assumptions about the processing speed and energy/resource usage of AIs. A trillion is a big number. Notably, there are currently only about 0.8% that many humans (8 billion out of a trillion), and humans are much more energy efficient than AIs.

[–] bitofhope@awful.systems 14 points 10 months ago (1 children)

During WWII everyone computed on slide rules which had zero transistors. Then they invented the transistor which had one transistor. Then they started making mainframes and Ataris and C64s which had like what, a hundred or a thousand? Then they invented computers and Windows and PS1 that had maybe a million transistors. And then we got dual core CPUs which had double the transistors per transistor. Then they invented GPUs which is like a thousand tiny CPUs in one CPU. Then they made i7 which probably has like a billion transistors and Ryzen which has ten billion and RTX4090 Ti has 79 billion. Now they say China is going to make a phone with a trillion transistors.

That's called exponential growth and assuming perfectly spherical frictionless nanometers it will go on forever. In eight to twelve years schoolchildren will be running GPT6 on their calculators. We will build our houses entirely out of graphics cards. AI will figure out cold fusion any week now and Dennard scaling will never hit its limit, right?

[–] locallynonlinear@awful.systems 9 points 10 months ago (1 children)

A trillion transistors on our phones? Can't wait to feel the improved call quality and reliability of my video conferencing!

[–] Evinceo@awful.systems 8 points 10 months ago

now you're in pure fantasy land.

[–] locallynonlinear@awful.systems 7 points 10 months ago (3 children)

We simply don't know how the world will look [X] (where X is anything at a bigger scale)

Yes. So? This has always been, and always will be, the case. Uncertainty is the only certainty.

When these assholes say things, the implication is always that the future world looks like everything you care about being fucked and you existing in an imprisoned state of stasis, so you'd better give us control here and now.

[–] Amoeba_Girl@awful.systems 7 points 10 months ago (1 children)

I do love this compulsion of rationalists to use Big Numbers as if they were sufficient arguments.

[–] Evinceo@awful.systems 5 points 10 months ago

but what if the number was really really big 🥺

[–] bitofhope@awful.systems 7 points 10 months ago

Nobody knows and it's impossible for anyone to know so let's all just assume I'm right.

[–] self@awful.systems 4 points 10 months ago

my mind always goes back to the sci-fi classics

[–] CodexArcanum@lemmy.world 6 points 10 months ago

We should also consider how the efforts of AI can be directed towards solving human aging; if aging is solved then everyone's time preference will go down a lot and we can take our time planning a path to a stable and safe human-primacy post-singularity world.

This shit is so funny. It always comes back around to immortality with these people.

I've spent my adult life to this point seeking to understand enlightenment and transcendence, and unless you're reading the doctrines of an immortality cult (lots of that in Daoism and TESCREAL, oddly enough), you mostly come to find that overcoming the fear of death is a big part of it. You can't have a free and open mind if the shadow of your mortality looms, so you learn to let go of it as one more attachment.

I think it's very revealing of what shallow minds these people truly have that the desperate craving for immortality is so naked in their beliefs. And that the concept goes so unexamined as well (oh, we just make everyone immortal and then we all agree to debate and solve our AI problem? Because that would work?).

Like, how stupid do you have to be to say on the one hand that unexamined AI risk requires a massive effort to run simulations and game-theory out the consequences, but then on the other hand to be like "things would be better if everyone was immortal. I will take no further questions on that."