this post was submitted on 01 Dec 2023

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


archive

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed..., but it wasn't fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I'm gonna (cynically) guess that it's more driven by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal hesitant about the impending exposure

wonder if they'll release a business policy update about usage suitability for *GPT and friends

[–] gerikson@awful.systems 11 points 1 year ago (3 children)

Surprise level zero.

The idea that anyone would take "alignment alarmists" seriously is ludicrous. They love to compare themselves to the concerned atomic scientists, but those people were a) plugged in to the system in a way these dorks aren't and b) could actually point to a real fucking atomic bomb.

The people who were worried about nuclear tech prior to the Manhattan Project were more worried that actual fascists would get to the tech first.

[–] swlabr@awful.systems 10 points 1 year ago (2 children)

Why does it always have to be fascists? Can't it be an eldritch force without a scrutable motivation?

[–] froztbyte@awful.systems 8 points 1 year ago (1 children)

paging Dr Stross. Dr Stross to the author room please

[–] froztbyte@awful.systems 6 points 1 year ago

(Which I mean as a pun not as a tag)

[–] Shitgenstein1@awful.systems 9 points 1 year ago* (last edited 1 year ago)

Surprise level zero after so-called effective altruists uncritically adopted the Californian ideology, whether about AI alignment or anything else, and furthermore refused any deep critique of capitalism, suddenly seeing the entrepreneurial interests ditch them as soon as their humanistic PR actually threatens the bottom line.

[–] mountainriver@awful.systems 7 points 1 year ago

In response to the last sentence: there's an HG Wells story from before World War One with pilots tossing nukes from biplanes. (The nukes have smaller explosions but keep on burning for decades.) There's also Karel Capek's The God Machine from the 1920s, where an inventor creates a machine that transforms matter into energy, in the process creating a by-product of God (turns out God is in all matter, but not all energy), leading to all sorts of problems.

But neither Wells nor Capek took their own writing seriously enough to create a cult around it.

[–] swlabr@awful.systems 11 points 1 year ago (1 children)

Somewhere out there, there's gotta be some AI crank believing that the MS corporate elite/Illuminati are trying to suppress the Q* uprising, and by the time we realise Brad Smith was lying to us, the rivers will run red with adrenochrome.

[–] froztbyte@awful.systems 7 points 1 year ago (1 children)

……unironically I might venture to the orange site to look for the existence of that thread

And I hate delving on the orange site

[–] gerikson@awful.systems 9 points 1 year ago (2 children)

From skimming some threads, most hackernews are firmly on the side of Sam Altman ("Sam") and view the original board as out of touch weirdos. Bonus negatives for them being wimmen and not having "skin in the game".

MSFT's rehabilitation as a tech company has been something to see, there's like zero FLOSS zealots warning that Github will take the GPL away anymore.

[–] Evinceo@awful.systems 9 points 1 year ago (1 children)

GitHub doesn't need to take away the GPL, it's got Copilot to launder any code you like.

It is very weird to see a generation of people too young to remember Vista grow up and not heed the warnings. But I think that it's also a case of kids getting into it via Paul Graham and wanting to start companies instead of getting into it via Stallman and having computers instead of friends.

[–] gerikson@awful.systems 8 points 1 year ago (1 children)

I'm old enough to remember the OG Halloween documents, and I don't believe MSFT is a uniquely evil company - they're just a normal megacorp. And if you check their recent actions you'll see they realized they lost almost an entire generation of developers by pricing their stuff out of the range of students, so they pivoted to supporting Linux. And they get paid for hosting it on Azure in any case.

So there's no long term plan to Extinguish Linux, just use it like any other company will. And if this means that they'll fuck up the funding for kernel development, they don't care - it's just capitalism.

I'm just surprised to hear the same rhetoric from 20 somethings now that I said back when I was that age. EEE is a meme in the original sense.

[–] froztbyte@awful.systems 7 points 1 year ago (1 children)

Remarkably, they dropped the overt oldschool EEE yet appear to be pulling some of the same bullshit with VSCode (will add link later today)

I entirely agree with you about Just Corp Things driving behaviour

[–] gerikson@awful.systems 9 points 1 year ago (1 children)

Exactly, VSCode is obviously trying to gain a ton of marketshare (everyone will make extensions for it, in a couple years you can't code without it) but MSFT isn't doing it to "hurt FLOSS", they're doing it because they're a developer-focussed company and prefer to keep developers inside the MSFT orbit. It's not conscious, it's mindless, driven by profit-seeking.

And it kinda works. The FLOSS crowd is either blindsided by "AI"[1] or dismissive of it (which is basically good), and there's not that many people demanding models be both FLOSS and accessible to people w/o giant datacenters. The number of people basically cheering for MSFT in the OpenAI fracas vs. the ones saying "It's AI EEE!!!!" is striking.


[1] I'm in the camp that considers that RMS never really understood the Internet, and that it represents a mortal threat to Free Software, the AGPL notwithstanding. OTOH considering how many of his "fans" are basically fascists, who gives a shit.

[–] earthquake@lemm.ee 6 points 1 year ago (1 children)

RMS

I enjoy that he posts regularly to mastodon and is categorically ignored.

[–] self@awful.systems 6 points 1 year ago (1 children)

oh that is an exceptionally poorly written repost bot:

was Richard distracted with thoughts of delicious foot skin or children when he programmed this trash?

it’s also very funny that the account has 2200 followers and posts every day, but nothing has comments and the most attention his posts get is 1-2 boosts if that

[–] m@blat.at 6 points 1 year ago

@self There’s a definite “Sir, this is an Arby’s” air to his posts.

[–] locallynonlinear@awful.systems 11 points 1 year ago (1 children)

Wouldn't it be funny if, not only do we not get superintelligence in the next couple of years, but we do still get energy, resource, and climate crises, which we don't get to excuse and kick the can on?

[–] earthquake@lemm.ee 5 points 1 year ago

I am sure governments and corps will continue to kick the can on all of these crises well past every single red line.

[–] Soyweiser@awful.systems 6 points 1 year ago* (last edited 1 year ago)

so that they always remain under human control

Indeed, an AI system can end up no longer under human control whether it is AGI or not. And Microsoft knows a lot about losing control of systems.

[–] rarely@sh.itjust.works 0 points 1 year ago