
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] froztbyte@awful.systems 10 points 1 week ago (1 children)

looks like they felt that chatgpt pro wasn't losing money fast enough, you can now get sora on the pro sub

[–] Soyweiser@awful.systems 10 points 1 week ago (8 children)

Friend of the sub Scott doing more supporting of rightwing extremists. Remember when we cried wolf? Good times.

[–] swlabr@awful.systems 9 points 1 week ago* (last edited 1 week ago) (4 children)

A few years ago, maybe a few months after moving to the bay area, a guy from my high school messaged me on linkedin. He was also in the bay and wanted to network, I guess? I ghosted him, because I didn’t know him at all, and when I asked my high school friends about him, he got some bad reviews. Anyway, today linkedin suggests/shoves a post down my throat where he is proudly talking about working at anthropic. Glad I ghosted!

PS/E: Anthro Pic is definitely a furry term. Is that anything?

[–] BlueMonday1984@awful.systems 9 points 1 week ago (9 children)

Ran across a piece of AI hype titled "Is AI really thinking and reasoning — or just pretending to?".

In lieu of sneering the thing, here's some unrelated thoughts:

The AI bubble has done plenty to broach the question of "Can machines think?" that Alan Turing first asked in 1950. The myriad failures and embarrassments it's given us offer plenty of evidence to suggest they can't - to repeat an old prediction of mine, I expect this bubble is going to kill AI as a concept, utterly discrediting it in the public eye.

On another unrelated note, I expect we're gonna see a sharp change in how AI gets depicted in fiction.

With AI's public image being redefined by glue pizzas and gen-AI slop on one end, and by ethical contraventions and Geneva Recommendations on another end, the bubble's already done plenty to turn AI into a pop-culture punchline, and support of AI into a digital "Kick Me" sign - a trend I expect to continue for a while after the bubble bursts.

For an actual prediction, I predict AI is gonna pop up a lot less in science fiction going forward. Even assuming this bubble hasn't turned audiences and writers alike off of AI as a concept, the bubble's likely gonna make it a lot harder to use AI as a plot device or somesuch without shattering willing suspension of disbelief.

[–] mountainriver@awful.systems 9 points 1 week ago (1 children)

I'm thinking stupid and frustrating AI will become a plot device.

"But if I don't get the supplies I can't save the town!"

"Yeah, sorry, the AI still says no"

[–] skillissuer@discuss.tchncs.de 8 points 1 week ago (3 children)
[–] BigMuffin69@awful.systems 8 points 1 week ago* (last edited 1 week ago)

Bruh, Anthropic is so cooked. < 1 billion in rev, and 5 billion cash burn. No wonder Dario looks so panicked promising super intelligence + the end of disease in t minus 2 years, he needs to find the world's biggest suckers to shovel the money into the furnace.

As a side note, rumored Claude 3.7(12378752395) benchmarks are making rounds and they are uh, not great. Still trailing o1/o3/grok except on the "agentic coding benchmark" (kek), so I guess they went all in on the AI SWE angle. But if they aren't pushing the frontier, then there's no way for them to pull customers from Xcels or people who have never heard of Claude in the first place.

On second thought, this is a big brain move. If no one is making API calls to Clauderino, they aren't wasting money on the compute they can't afford. The only winning move is to not play.

[–] BlueMonday1984@awful.systems 8 points 1 week ago

Baldur's given his thoughts on Bluesky - he suspects Zitron's downplayed some of AI's risks, chiefly in coding:

There’s even reason to believe that Ed’s downplaying some of the risks because they’re hard to quantify:

  • The only plausible growth story today for the stock market as a whole is magical “AI” productivity growth. What happens to the market when that story fails?
  • Coding isn’t the biggest “win” for LLMs but its biggest risk

Software dev has a bad habit of skipping research and design and just shipping poorly thought-out prototypes as products. These systems get increasingly harder to update over time and bugs proliferate. LLMs for coding magnify that risk.

We’re seeing companies ship software nobody in the company understands, with edge cases nobody is aware of, and a host of bugs. LLMs lead to code bases that are harder to understand, buggier, and much less secure.

LLMs for coding isn’t a productivity boon but the birth of a major Y2K-style crisis. Fixing Y2K cost the world’s economy over $500 billion USD (corrected for inflation), most of it borne by US institutions and companies.

And Y2K wasn’t promising magical growth on the order of trillions, so the perceived loss of a failed AI bubble in the eyes of the stock market would be much higher.

On a related note, I suspect programming/software engineering's public image is going to spectacularly tank in the coming years - between the impending Y2K-style crisis Baldur points out, Silicon Valley going all-in on sucking up to Trump, and the myriad ways the slop-nami has hurt artists and non-artists alike, the pieces are in place to paint an image of programmers as incompetent fools at best and unrepentant fascists at worst.
