blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 8 points 3 weeks ago

Here, have a community ban to enforce that self-proclaimed flounce.

[–] blakestacey@awful.systems 15 points 3 weeks ago

Air so polluted it makes people sick, but it's all worth it because you can't be arsed to remember the syntax of a for loop.

[–] blakestacey@awful.systems 4 points 3 weeks ago (1 children)

Nit: It's "Death and the Gorgon".

It's linked here, so I'll hazard a guess that the copy is intended to be public.

[–] blakestacey@awful.systems 17 points 3 weeks ago

The pro-child-porn caucus.

[–] blakestacey@awful.systems 11 points 3 weeks ago (1 children)

Science writer Philip Ball observes,

Just watched Eric Schmidt (former Google CEO) say "We believe as an industry... that within 3-5 years we'll have AGI, which can be defined as a system that is as smart as [big deal voice] the smartest mathematician, physicist, [lesser deal voice] artist, writer, thinker, politician ... I call this the San Francisco consensus, because everyone who believes this is in San Francisco... Within the next year or two, this foundation gets locked in, and we're not going to stop it. It gets much more interesting after that...There will be computers that are smarter than the sum of humans"

"Everyone who believes this is in San Francisco" approaches "the female orgasm is a myth" levels of self-own.

[–] blakestacey@awful.systems 6 points 4 weeks ago (1 children)

Back in the twenty-aughts, I wrote a science fiction murder mystery involving the invention of artificial intelligence. That whole plot angle feels dead today, even though the AI in question was, you know, in the Commander Data tradition, not the monstrosities of mediocrity we're suffering through now. (The story was also about a stand-in for the United States rebuilding itself after a fascist uprising, the emotional aftereffects of the night when shooting the fascists was necessary to stop them, queer loneliness and other things that maybe hold up better.)

[–] blakestacey@awful.systems 8 points 4 weeks ago

Being unsure of whether you want to fuck robo-Maria or be robo-Maria is a classic sign of bisexuality among reconstructors of lost film media.

Yes, it's a niche, but you know it's not an empty niche.

[–] blakestacey@awful.systems 11 points 4 weeks ago

I've noticed the occasional joke about how new computer technology, or LLMs specifically, have changed the speaker's perspective about older science fiction. E.g., there was one that went something like, "I was always confused about how Picard ordered his tea with the weird word order and exactly the same inflection every time, but now I recognize that's the tea order of a man who has learned precisely what is necessary to avoid the replicator delivering you an ocelot instead."

Notice how in TNG, everyone treats a PADD as a device that holds exactly one document and has to be physically handed to a person? The Doylist explanation is that it's a show from 1987 and everyone involved thought of them as notebooks. But the Watsonian explanation is that a device that holds exactly one document and zero distractions is the product of a society more psychologically healthy than ours....

[–] blakestacey@awful.systems 4 points 4 weeks ago

🎵 I'm a drop-shipping girl / in a shittified world / chat me up / bot me down / let's go party! 🎵

[–] blakestacey@awful.systems 2 points 4 weeks ago

Having now refreshed my vague memories of the Feynman Lectures on Computation, I wouldn't recommend them as a first introduction to Turing machines and the halting problem. They're overburdened with detail: You can tell that Feynman was gleeful over figuring out how to make a Turing machine that tests parentheses for balance, but for many readers, it'll get in the way of the point. Comparing his discussion of the halting problem to the one in The Princeton Companion to Mathematics, for example, the latter is cleaner without losing anything that a first encounter would need. Feynman's lecture is more like a lecture from the second week of a course, missing the first week.
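(For the curious: the flavor of Feynman's parenthesis-balancing construction, paraphrased as a Python sketch rather than his actual tape-and-quintuples machine — this is my loose rendition of the strategy, not his table of states:)

```python
# Rough paraphrase (not Feynman's actual machine) of the cross-out
# strategy: repeatedly find a '(' immediately followed by ')', mark
# both cells as crossed out, and rescan from the left. The string is
# balanced iff only crossed-out cells remain once no pair can be found.

def balanced(tape: str) -> bool:
    cells = list(tape)
    while True:
        last_open = None
        for i, c in enumerate(cells):
            if c == "(":
                last_open = i          # nearest un-crossed '(' so far
            elif c == ")":
                if last_open is None:
                    return False       # ')' with no '(' to its left
                cells[last_open] = "X" # cross out the matched pair
                cells[i] = "X"
                break                  # rescan from the left
        else:
            # No matchable ')' left: balanced iff no stray '(' survives.
            return all(c == "X" for c in cells)

print(balanced("(()())"), balanced("(()"))  # True False
```

The rescan-and-cross-out loop is what makes it feel like a Turing machine shuffling back and forth over a tape instead of a parser with a stack — charming to work out, but exactly the kind of detail that buries the halting-problem punchline for a first-time reader.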

[–] blakestacey@awful.systems 4 points 4 weeks ago (8 children)

Comment removed for being weird (derogatory). I refrained just barely from hitting the "ban from community" button on the slim chance it was a badly misfired joke from a person who can otherwise behave themself, but I won't object if any other mod goes ahead with the banhammer.

 

Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota here, and the bar really isn't that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

 

If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

In Surfaces and Interfaces, online 17 February 2024:

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

In Radiology Case Reports, online 8 March 2024:

In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

Edit to add this erratum:

The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

Edit again to add this article in Urban Climate:

The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

And this one in Energy:

Certainly, here are some potential areas for future research that could be explored.

Can't forget this one in TrAC Trends in Analytical Chemistry:

Certainly, here are some key research gaps in the current field of MNPs research

Or this one in Trends in Food Science & Technology:

Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

And we mustn't ignore this item in Waste Management Bulletin:

When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

Certainly, here's the text without bullet points:

 

In which a man disappearing up his own asshole somehow fails to be interesting.

 

So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot that's wrong, and even self-contradictory, in what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.
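(For the record, the actual content is one clean inequality about standard deviations, the Kennard form

$$\sigma_x \, \sigma_p \ge \frac{\hbar}{2},$$

which is precisely the precision that averaging over pop-science paraphrases smears away.)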

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?

 

The day just isn't complete without a tiresome retread of freeze peach rhetorical tropes. Oh, it's "important to engage with and understand" white supremacy. That's why we need to boost the voices of white supremacists! And give them money!

 

With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).

 

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"

 

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script, so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

 

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

 

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

 

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way.
