mirrorwitch

joined 8 months ago
 

Disposable multiblade razors are objectively worse than safety razors, on all counts. They shave less smoothly while causing more razor burn. They're cheaper as an initial investment but get more expensive very quickly, making you dependent on overpriced replacements and gimmicks that barely last a few uses. That's not counting the "externality costs", which is a euphemism for the costs pushed onto poor countries and nonhuman communities by the production, transport and disposal of all that single-use plastic (a safety razor is 100% metal, and so are the replacement blades, which come packed in paper).

About the only advantage of disposables is that they're easier to use for beginners. And even that is debatable. When you're a beginner with a safety razor you maybe nick yourself a few times until you learn the skill of following the curves of your skin. Your skin itself maybe gets sensitive at the start, unused to the exfoliation you get during a proper smooth shave. But how long do you think you stay "a beginner" when you shave every day? It's not like you're learning to play the violin; it's not that hard a skill, a week or two tops and it becomes automatic.

But this small barrier to entry is enough, when paired with the bias and interests of razor manufacturers. Marketing goes heavy on the disposables, and you can't find a good quality safety razor or a good deal on replacement blades at the grocery shop; you have to be in the know and order it online. You have to wade through "manly art of the masculine man" forums that will tell you the only real safety razor is custom-made in Tibet by electric monks hand-hammering audiophile alloys, and if you don't shave with artisanal castor soap recipes from 300 BCE using beaver hair brushes, your skin is going to fall off and rot. Which is to say, safety razors are now a niche product, a hipster thing, a frugalist's obscure economy lifehack. A safety razor is a trivially simple and economical device, just a metal holder for a flat blade; but its very superiority now counts against it, weaponised to make it look inaccessible. People have been trained to think of anything that requires even a little bit of patience or skill as not for them; perversely, even reasonableness can feel like "not for my kind".

Not by accident; the one thing that disposables do really well is "transferring more of your monthly income to Procter & Gamble shareholders."

I could write a long text very similar to this about how scythes can cut grass cheaper, faster, neater, requiring no input but a whetstone—and some patience to learn the skill, but how long does it take to learn that if you're a professional grass-cutter—when compared to the noisy motor blades that fill my morning right now, and every few months, as the landlord sends waves of poorly-paid migrant labour to permanently damage their own sense of hearing along with the dandelions and clover that the bees need so desperately. But you get the point. More technology does not equal better, even for definitions of "better" that only care for the logic of productivity and ignore the needs (material, emotional, spiritual) of social and ecological communities.


You get where I'm going with this analogy. I keep waiting for the other shoe to drop on "generative AI". For the public at large to wake up like investors waking up to WeWork or the Metaverse, and everyone realises: omg, what were we thinking, this is all bullshit! There's no point at all in using these things to ask questions or to write text or anything else really! But I'm finally accepting that that shoe is never dropping. It's like waiting for the moment when people realise that multi-blade plastic Gillettes are a scam. Not happening; the system isn't set up that way. For as long as you go to the supermarket and this is the "normal" way to shave, that's how shaving is going to happen.

I wrote before on how "the broken search bar is symbiotic with the bullshitting chatbot": currently Google "AI" Summary is better than Google Search, not because Google "AI" Summary is good or reliable, but because the search has been internally sabotaged by the incentive structures of web companies. If you're a fellow "AI" refuser and you've been struggling to get any useful results out of web searches, think of how it must feel for people who go for the chatbot, how much easier and more direct. That's the razor we have on the shelves.

"AI" doesn't have to work for the scam to be sustainable; it just has to feel like it more or less kinda does, most of the time. (No one has ever achieved a close shave on a Gillette Mach 3, but hey, maybe you're prompting it wrong.) As long as "generating" something with "AI" feels like it lets you skip even the smallest barrier to entry (like asking a question in a forum on a niche topic). As long as it feels quicker, easier, more convenient.

This is also the case for things like "AI translations" or "AI art" or "vibe coding". The real solution to "AI", like other forms of unnecessarily complex technology, would involve people feeling like they have the time and mental space to do things for pleasure. "AI" is kind of an anaerobic infection, an opportunistic disease caused by lack of oxygen. No one can breathe in this society. The real problem is capitalis—

Now don't get me wrong, the "AI" bubble is still going to pop. There's no way it won't; investors have put more money into this thing than into entire countries, contrary to OpenAI's claims the costs of training and operation keep exploding, and in a world sliding into recession, at some point even capitalists with more money than common sense will have to reckon with the absence of ROI. But the damage is done. We're in ELIZA world now, and long after OpenAI is dead we'll still be reading books only to find out the gormless translation was "AI", playing games with background "art" "generated" by "AI", interacting online with political agitators spamming nonsense who turn out to be "AI", right until the day when electricity becomes too scarce for it to be cost-efficient to spam people in this way.

[–] mirrorwitch@awful.systems 7 points 19 hours ago* (last edited 19 hours ago)

Huh, I had missed the part in 2020 when Peter Thiel just flat out stated that it only makes sense to be in favour of capitalism if you're a capital owner.

[–] mirrorwitch@awful.systems 4 points 2 days ago

We live in hellworld, please don't get my hopes up...

[–] mirrorwitch@awful.systems 12 points 4 weeks ago (1 children)

jesus fucking christ I think that IDF tweet is the worst thing that has ever existed

[–] mirrorwitch@awful.systems 9 points 1 month ago (8 children)

oh no :(

poor strange she didn't deserve that :(

[–] mirrorwitch@awful.systems 12 points 1 month ago (1 children)

While you all laugh at ChatGPT slop leaving "as a language model..." cruft everywhere, from Twitter political bots to published Springer textbooks, over there in lala land "AIs" are rewriting their reward functions and hacking the matrix and spontaneously emerging mind models of Diplomacy players and generally a week or so from becoming the irresistible superintelligent hypno goddess:

https://www.reddit.com/r/196/comments/1jixljo/comment/mjlexau/

[–] mirrorwitch@awful.systems 11 points 1 month ago* (last edited 1 month ago)

The problem with FOSS for me is the other side of the FOSS surplus: namely corporate encircling of the commons. The free software movement never had a political analysis of the power imbalance between capital owners and workers. This results in the "Freedom 0" dogma, which allows everything workers produce with a genuine communitarian, laudably pro-social sentiment to be easily coopted and appropriated into the interests of capital owners (for example with embrace-and-extend, network effects, product bundling, or creative backstabbing of the kind Google did to Linux with the Android app store). LLM scrapers are just the latest iteration of this.

A few years back various groups tried to tackle this problem with a shift to "ethical licensing", such as the non-violent license, the anti-capitalist software license, or the do no harm license. While license-based approaches won't stop capitalists from using the commons to target immigrants (NixOS), enable genocide (Meta) or bomb children (Google), this was in my view worthwhile as a rallying cry of sorts; drawing a line in the sand between capital owners and the public. So if you put your free time into a software project meant for everyone and some billionaire starts coopting it, you can at least make it clear it's non-consensual, even if you can't out-lawyer capital owners. But these ethical-licensing initiatives didn't seem to make any strides, due to the FOSS culture issue you describe; traditional software repositories didn't acknowledge them or build any infrastructure for them, and ethical licenses would still be generically "non-free" in FOSS spaces.

(Personally, I've used FOSS operating systems for 26 years now; I gave up on contributing or participating in the "community" a long time ago, burned out by all the bigotry, hostility, and First World-centrism of its forums.)

[–] mirrorwitch@awful.systems 11 points 1 month ago (1 children)

I hate programming but if I wanted to waste any time programming stuff my idea would be something akin to Yahoo! Directory from before Google, or del.icio.us from the 2000s, but distributed, and tied to a PGP-like web of trust system.

You search for a topic, and you get links saved with that tag: first by people you've personally validated and trust, then by people they trust, then by people you don't know but who were added as probably fine, and so on. Dunno how doable something like this would be.
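The ranking rule described above (closest endorsers first, walking outward through the trust graph) can be sketched as a breadth-first search. This is a toy, single-machine sketch with made-up data structures; a real distributed version would need signed assertions, replication, revocation and so on, none of which is shown here.

```python
from collections import deque

def search(tag, me, trust, bookmarks, max_depth=3):
    """Rank links saved under `tag`, by trust distance from `me`.

    trust:     dict user -> set of users they vouch for (the web of trust)
    bookmarks: dict user -> dict tag -> set of links that user saved
    Returns a list of (link, distance) pairs, closest endorsers first.
    """
    seen = {me: 0}          # user -> distance from me in the trust graph
    queue = deque([me])
    results = {}            # link -> smallest distance of anyone who saved it
    while queue:
        user = queue.popleft()
        depth = seen[user]
        # collect this user's bookmarks for the tag
        for link in bookmarks.get(user, {}).get(tag, set()):
            results[link] = min(results.get(link, depth), depth)
        # expand one ring further out, up to the cutoff
        if depth < max_depth:
            for friend in trust.get(user, set()):
                if friend not in seen:
                    seen[friend] = depth + 1
                    queue.append(friend)
    return sorted(results.items(), key=lambda kv: kv[1])
```

So a link saved by someone you validated directly outranks one saved three hops out, and strangers beyond the cutoff simply don't appear; the hard part the sketch skips is doing this without a central server.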

 

The other day I realised something cursed, and maybe it's obvious but if you didn't think of it either, I now have to further ruin the world for you too.

Do you know how Google took a nosedive some three or four years ago, when managers decided that retention and engagement matter more than user success, and, as this process continued, all the results became so vague and corporatey as to make many searches downright unusable? The way your keywords are now only vague suggestions at best?

And do you know how that downward spiral got even worse after "AI" took off, not only because the Internet is now drowning in signal-shaped noise, not only because of the "AI snippets" that I'm told USA folk are forced to see, but because tech companies have bought into their own scam and started to use "AI" technology internally, with the effect of an overnight qualitative step down in accuracy, speed, and resource usage?

So. Imagine what this all looks like for the people who have replaced the search bar with the "AI" chatbot.

You search something in Google, say, "arrow materials designs Amazonian peoples". You only get fluff articles, clickbait news, videogame wikis, and a ton of identical "AI" noise articles barely connected to the keywords. No depth, no details, no info. A very frustrating experience.

You ask ChatGPT or Google Gemini or Duck.AI, as if it was a person, as if it had any idea what it's saying: What were the arrows of Amazonian cultures made of? What type of designs did they use? Can you compare arrows from different peoples? How did they change over time, are today's arrows different?

The bot happily responds in a wise, knowledgeable tone, weaving fiction into fact and conjecture into truth. Where it doesn't know something it just makes up an answer-shaped string of words. If you use an academese tone it will respond in a convincing pastiche of a journal article, and even link to references, though if you read the references they don't say what they're claimed to say but who ever checks that? And if you speak like a question-and-answer section it will respond like a geography magazine, and if you ask in a casual tone it will chat like your old buddy; like a succubus it will adapt to what you need it to be, all the while draining all the fluids you need to live.

From your point of view you had a great experience: no irrelevant results, no intrusive suggestion boxes, no spam articles; just you and the wise oracle who answered exactly what you wanted. Sometimes the bot says it doesn't know the answer, but you just ask again with different words ("prompt engineering") and a full answer comes. You compare that experience to the broken search bar. "Wow, this is so much better!"

And sure, sometimes you find out an answer was fake, but what did you expect, perfection? It's a new technology and already so impressive, soon¹ they will fix the hallucination problem. It's my own dang fault for being lazy and not double-checking, haha, I'll be more careful next time.²
(1: never.)
(2: never.)

Imagine growing up with this. You've never even seen search bars that work. From your point of view, "AI" is just superior. You see some cool youtuber you like make a 45min detailed analysis of why "AI" does not and cannot ever work, and you're confused: it's already useful for me, though?

Like saying Marconi the mafia don has already helped with my shop, what do you mean, extortion? Mr Marconi is already beneficial to me? Why, he even protected me from those thugs...

Meanwhile, from the point of view of the soulless ghouls at Google? Engagement was atrocious when we had search bars that worked. People click the top result and are on their merry way, already out of the site. The search bar that doesn't work is a great improvement; it makes them hang around and click many more things for several minutes, number go up, ad opportunities, great success. And Gemini? Whoa. So much user engagement out of Gemini. And how will uBlock Origin ever manage to block Gemini ads when we start monetising it by subtly recommending this or that product seamlessly within the answer text...

[–] mirrorwitch@awful.systems 4 points 1 month ago

Yes to all that, plus the browser thing: How annoying the browsers are with expired certificates. I mean it has to be super hard to allow me to guess that the admin just forgot to renew the certificate, or it wouldn't protect me from the very common threat model of... ähm... uh...

(it's to protect the CA business model, of course.)

[–] mirrorwitch@awful.systems 8 points 2 months ago* (last edited 2 months ago)

small domino: Paul Graham's "Hackers and Painters" (2003)

....

big domino: "AI" "art" "realism"

[–] mirrorwitch@awful.systems 2 points 2 months ago

I see someone else also just learned of this from the bonus episode of the "Bad Hasbara" podcast that's just been made public

[–] mirrorwitch@awful.systems 9 points 3 months ago (7 children)

No, I'm with you. Bad feeling about where this is going.

[–] mirrorwitch@awful.systems 8 points 3 months ago (2 children)

I find it impressive how gen-AI developers built a technology that is fine-tuned to generate content that looks precisely passably plausible, but is never good enough to be correct or interesting or beautiful or worthwhile in any way.

Like, if I was trying to fill the Internet with noise to ruin it on purpose, I couldn't do better than this. (Mostly on account of me not having massive data centres, nor the moral callousness to spew that much carbon, but still.) It's the ideal infohazard weapon if your goal is to worsen as many lives as you can.

 

"We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege."

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers "should" be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.

Presented without comment.
