Rupert Murdoch’s News Corporation fills its tabloid papers across Australia with right-wing slop. Now the slop will come from a chatbot — and not a human slop churner.
The quality of its tabloids will remain exactly the same, I presume.
r/cursor is the gift that keeps on giving:
Don't Date Robots!
Kill them instead
In other news, IETF 127 (which is being held in November) is facing a boycott months in advance. The reason? It's being held in the United States.
This likely applies to a lot of things, but that would have been unthinkable before the election.
At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.
On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.
For arts, the slop-nami has made "AI" synonymous with "creative sterility" and likely painted the field as, to copy-paste a previous comment, "all style, no substance, and zero understanding of art, humanities, or how to be useful to society".
For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.
(Not sure if this makes sense - it was 1AM where I am when I wrote this)
We can add that to the list of things threatening to bring FOSS as a whole crashing down.
Plus the culture being utterly rancid, the large-scale AI plagiarism, the declining industry surplus FOSS has taken for granted, Richard Stallman tainting the whole movement by association, the likely-tanking popularity of FOSS licenses, AI being a general cancer on open source, and probably a bunch of other things I've failed to recognise or make note of.
FOSS culture being a dumpster fire is probably the biggest long-term issue - fixing that requires enough people within the FOSS community to recognise they're in a dumpster fire, and care about developing the distinctly non-technical skills necessary to un-fuck the dumpster fire.
AI's gonna be the more immediately pressing issue, of course - it's damaging the commons by merely existing.
Update on the Vibe Coder Catastrophe^tm^: he's killed his current app and seems intent on vibe coding again:
Personally, I expect this case won't be the last "vibe coded" app/website/fuck-knows-what to get hacked to death - security is virtually nonexistent, and the business/techbros who'd be attracted to it are unlikely to learn from their mistakes.
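(To illustrate what "virtually nonexistent" tends to mean in practice, here's a purely hypothetical sketch of the genre - the endpoint, the Express framework and every name in it are my own stand-ins, not anything from the actual app. No auth, no rate limiting, no input validation: anyone who finds the URL gets to burn the owner's API credits for free.)

```typescript
// Hypothetical vibe-coded backend - names and framework are illustrative, not from the real app.
import express from "express";

const app = express();
app.use(express.json());

// No authentication, no rate limiting: whoever finds this URL can hammer it
// and run the owner's LLM bill up until the card maxes out.
app.post("/api/generate", async (req, res) => {
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      // In the real-world incidents, keys like this often ended up hardcoded
      // in client-side bundles where anyone could extract them.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      // Whatever the client sends goes straight through, unvalidated.
      messages: [{ role: "user", content: req.body.prompt }],
    }),
  });
  res.json(await upstream.json());
});

app.listen(3000);
```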
I knew that was Kaze Emanuar before I even clicked the link.
New piece from Brian Merchant: DOGE's 'AI-first' strategist is now the head of technology at the Department of Labor, which is about...well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:
“I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”
How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of things I likely haven't accounted for, imposing them will likely prove harder than they thought.
(I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech", which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)
TV Tropes got an official app, featuring an AI "story generator". Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it "if we see that users don't find the story generator helpful".
Nice to get a look on the inside from one of the 21st-century Oppenheimers.