self

joined 2 years ago
[–] self@awful.systems 15 points 3 weeks ago (3 children)

did you know: you too can make your dreams come true with Vibe Coding (tm) thanks to this article’s sponsors:

Replit Agent, Cursor Composer, Pythagora, Bolt, Lovable, and Cline

and other shameful assholes with cash to burn trying to astroturf a term from a month old Twitter brainfart into relevance

[–] self@awful.systems 15 points 3 weeks ago (13 children)

no thx, nobody came here for you to assign them tedious homework

[–] self@awful.systems 6 points 4 weeks ago

fuck yeah! it’s a very solid start, and I appreciate the (is that clickbaity enough) in the thumbnail

[–] self@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

I decided to waste my fucking time and read the awful medium article that keeps getting linked and, boy fucking howdy, it’s exactly what I thought it was. let’s start with the conclusion first:

TLDR: my conclusion is that it is far more likely that Proton and its CEO are actually liberals.

which is just a really weird thing to treat like a revelation when we’ve very recently seen a ton of liberal CEOs implement fash policies, including one (Zuckerberg) who briefly considered running as a Democrat before he was advised that nobody found him the least bit appealing

anyway, let’s go to the quick bullet points this piece of shit deserves:

  • it’s posted by an account that hasn’t done anything else on medium
  • the entire thing is written like stealth PR and a bunch of points are copied straight out of Proton’s marketing. in fact, the tone and structure are so off that I’m just barely not willing to accuse this article of being generated by an LLM, because it’s just barely not repetitive enough to entirely read like AI
  • they keep doing the “nobody (especially the filthy redditors) read Andy or Proton’s actual posts in full” rhetorical technique, which is very funny when people on mastodon were frantically linking archives of those posts after they got deleted, and the posts on Reddit were deleted in a way that was designed to provoke confusion and cover Proton’s tracks. I can’t blame anyone for going on word of mouth if they couldn’t find an archive link.
  • like every liberal-presenting CEO turned shithead, Andy has previously donated a lot of money to organizations associated with the Democrats
  • not a single word about how Proton’s tied up in bitcoin or boosting LLMs and where that places them politically
  • also nothing about how powerless the non-profit associated with Proton is in practice
  • Andy can’t be a shithead, he hired a small handful of feminists and occasionally tweets about how much he supports left-wing causes! you know, on the nazi site
  • e: “However, within the context of Trump’s original post that Andy is quoting, it seems more likely that “big business” = Big Tech, and “little guys” = Little Tech, but this is not obvious if you did not see the original post, and this therefore caused outrage online.” what does this mean. that’s exactly the context I read into Andy’s original post, and it’s a fucking ridiculous thing to say and a massive techfash dogwhistle loud and shrill enough that everybody heard it. it’s fucking weird to falsely claim you’re being misinterpreted and then give an explanation that’s completely in line with the damning shit you’re being accused of, then for someone else to come along and pretend that somehow absolves you

there’s more in there but I’m tired of reading this article, the writing style really is fucking exhausting

e: also can someone tell me how shit like this can persuade anyone? it’s one of the most obvious, least persuasive puff pieces I’ve ever read. did the people who love proton more than they love privacy need something, anything to latch onto to justify how much they like the product?

[–] self@awful.systems 11 points 1 month ago (1 children)

tech apologists love to tell you the legal terms attached to the software you’re using don’t matter, then the instant the obvious happens, they immediately switch to telling you it’s your fault for not reading the legal terms they said weren’t a big deal. this post and its follow-up from the same poster are a particularly good take on this.

also:

When you use Firefox or really any browser, you’re giving it information

nobody gives a fuck about that, we’re all technically gifted enough to realize the browser receives input on interaction. the problem is Mozilla receiving my website addresses, form data, and uploaded files (and much more), and in fact getting a no-restriction license for them and their partners to do what they please with that data. that’s new, that’s what the terms of use cover, and that’s the line they crossed. don’t let anybody get that shit twisted — including the people behind one of the supposedly privacy-focused Firefox forks

[–] self@awful.systems 8 points 1 month ago (1 children)

tons of communities are now insulated to a point where you can’t even get in if you want to, because unless you’re large enough or have enough booster points (which, to no one’s surprise, cost money, and only last for a limited time) you can’t generate permanent invite links, so you gotta know someone to get in.

you fucking what now? I’m unwillingly in so many discords but I avoid anything to do with their shitty micropurchase economy so I didn’t know about this. that’s why so many projects have expired discord links in their docs? holy fuck this is unworkable. discord is a shitty landlord rentseeking from so many open source projects with this crap

[–] self@awful.systems 9 points 1 month ago

legally, it absolutely does, and it gets even worse when you dig deeper. Mozilla is really going all in on being a bunch of marketing creeps.

[–] self@awful.systems 11 points 1 month ago

oh good, the open source discords I’m unwillingly a part of already had an unbearable number of zero effort generated meme images, and now they’ll have even more!

Discord hired a pile of ex-Meta management in mid-2023, to disastrous internal effect.

of course they did. nobody does confidently wrong and incredibly damaging like an ex-Facebook PM

[–] self@awful.systems 13 points 1 month ago (18 children)

so Firefox now has terms of use with this text in them:

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

this is bad. it feels like the driving force behind this is the legal requirements behind Mozilla’s AI features that nobody asked for, but functionally these terms give Mozilla the rights to everything you do in Firefox effectively without limitation (because legally, the justification they give could apply to anything you do in your browser)

I haven’t taken the rebranded forks of Firefox very seriously before, but they might be worth taking a close look at now, since apparently these terms of use only apply to the use of mainline Firefox itself and not any of the rebrands

[–] self@awful.systems 13 points 1 month ago (5 children)

after Proton’s latest PR push to paint their CEO as absolutely not a fascist failed to convince much of anyone (feat. a medium article I’m not gonna read cause it’s a waste of my time getting spread around by brand new accounts who mostly only seem to post about how much they like Proton), they decided to quietly bow out of mastodon and switch to the much more private and secure platform of… fucking Reddit of all things, where Proton can moderate critical comments out of existence (unfun fact: in spite of what most redditors believe, there’s no rule against companies moderating their own subs — it’s an etiquette violation, meaning nobody gives a fuck) and accounts that only post in defense of Proton won’t stick out like a sore thumb

[–] self@awful.systems 5 points 1 month ago

you’re fucking right! my brain recombined that into a still wrong but slightly more sane claim when I first read it: “what if the packages you installed lose all their maintainers?” and, like, I think the only package manager that sometimes solves for that is Nix, and it solves it in the most annoying way possible (removal from nixpkgs and your config breaks, instead of any attempt at using an incredibly powerful software archival tool for intentionally archiving software (and it pisses me off that nixpkgs could trivially be the archive.org of packaging and it just isn’t, cause that’s not a murder drone))

but no, something about arch being relatively manually configured broke that poster’s brain into thinking that arch of all things didn’t have basic package management functionality, somehow. arch, the linux for former BSD kids too exhausted to deal with compatibility. nah, only red hat knows about, uh, basic software maintenance

[–] self@awful.systems 8 points 1 month ago (5 children)

it’s beautiful how you can pick out any sentence in that quote and chase down an entire fractal of wrongness

  • “Users are expected to handle system upgrades” nope, pacman does that automatically (though sometimes it’ll fuck your initramfs because arch is a joy)
  • “manage the underlying software stack” ??? that’s all pacman does
  • “configure MAC (Mandatory Access Control), write profiles for it” AppArmor clearly isn’t good enough cause red hat (sploosh) uses selinux
  • “set up kernel module blacklists, and more. Failing to do this results in a less secure operating system.” maybe I’m showing my ass on this one but I don’t think I’ve ever blacklisted a kernel module for security. usually it’s a hacky way to select which driver you want for your device (hello nvidia), stop a buggy device from taking down the system (hello again nvidia! and also like a hundred vendors making shit hardware that barely works on windows, much less linux), and passthru devices that are precious about their init order to qemu (nvidia again? what the fuck)
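(for reference, the blacklist-as-driver-selection thing above is literally one line of modprobe config — the filename here is illustrative, but the /etc/modprobe.d convention is standard:)

```
# /etc/modprobe.d/blacklist-nouveau.conf
# keep the in-tree nouveau driver from binding the GPU so the
# proprietary nvidia driver (or vfio-pci for qemu passthrough)
# can claim it instead -- driver selection, not "security"
blacklist nouveau
```

truly the arcane sysadmin burden red hat is saving us all from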

and bonus wrongness:

For example, DNF in Fedora handles transitions like moving from PulseAudio to PipeWire, which can enhance security and usability.

i fucking love when a distro upgrade breaks audio in all my applications cause red hat suddenly, after over a decade of being utterly nasty about it, got anxious about how much pulseaudio fucking sucks

 

the API is called Web Environment Integrity, and it’s a way to kill ad blockers first and a Google ecosystem lock-in mechanism second, with no other practical use case I can find

 

Winter is coming and Collapse OS aims to soften the blow. It is a Forth (why Forth?) operating system and a collection of tools and documentation with a single purpose: preserve the ability to program microcontrollers through civilizational collapse.

imagine noticing that civilization is collapsing around you and not immediately opening an emacs lisp buffer so you can painstakingly recreate the entire compiler toolchain and runtime environment for the microcontrollers around you as janky code running in your editor. fucking amateurs

 

Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the reason why the output isn’t very interesting is because the AI isn’t a robot:

What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.

It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. all claims to the contrary are paid conspirators in the pocket of Big Dick Blick

Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)

I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

 

you know it’s a fucking banger when you try to collapse the top comment in the thread to skip all the folks litigating over the value of an ebike and more than 2/3rds of the comments in an 884 comment long thread disappear

also featuring many takes from understanders of statistics:

I'm wary about using public roads to test these, but I think the way the data is presented is misleading. I'm not sure how it's misleading, but separating "incidents" into categories (safety, traffic, accident, etc) might be a good start.

For example, I could start coning cruise cars, and cause these numbers to skyrocket. While that's an inconvenience to other drivers, it's not a safety issue at all.

By the way, as a motorcyclist (and thus hyper annoyed at bad driving), I find Uber/Lyft/Food drivers to be both much more dangerous and inconveniencing than these self driving cars.

 

see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher

 

there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

§1. Purpose and Scope

The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendant virtues.

you shouldn’t have pregamed

 

today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

oh dear

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag

that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

MDN core reviewer/maintainer here.

Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.

The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely no discussion or background info of any kind.

At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from being able to be doing MDN work I’d have otherwise normally been doing.)

Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

(note: the above reply was hidden in the GitHub thread by Mozilla, usually something you only do for off topic replies)

so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

…so anyway, some kind of space alien comes in and locks the thread:

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

congratulations to be a part of it indeed

 

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is

 

there’s just so much to sneer at in this thread and I’ve got choice paralysis. fuck it, let’s go for this one

everyone thinking Prompt Engineering will go away dont understand how close Prompt Engineering is to management or executive communications. until BCI is perfect, we'll never be done trying to serialize our intent into text for others to consume, whether AI or human.

boy fuck do I hate when my boss wants to know how long a feature will take, so he jacks straight into my cerebral cortex to send me email instead of using zoom like a normal person

 

it’s a short comment thread so far, but it’s got a few posts that are just condensed orange site

The constant quest for "safety" might actually be making our future much less safe. I've seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what "facts" are?

 

in the least surprising twist of 2023, the ~~extremely mid philosopher~~ visionary AI researcher Douglas Hofstadter has started to voice concerns about chatbots taking over the world

orange site has some takes:

Again, I repeat everyone that is loling at x-risk an idiot and that includes many high profile people with huge egos and counter culture biases. (Hello @pmarca). There is a big movement to call ai doomers many names and generally make fun and dismiss the risk. It is exactly like people laughing at nuclear risks saying its not possible or not a thing even when Einstein and Oppenheimer were warning us. If you belong in this group is up to you.

to quote Major General Thomas Farrell during the Trinity test, “lol. lmao”

gwern in the LW comments:

That is, whatever the snarky "don't worry, it can't happen" tone of his public writings about DL has been since ~2010, Hofstadter has been saying these things in private for at least a decade*, starting somewhere around Deep Blue which clearly falsified a major prediction of his, and his worries about the scaling paradigm intensifying ever since; what has happened is that only one of two paradigms can be true, and Hofstadter has finally flipped to the other paradigm. Mitchell, however, has heard all of this firsthand long before this podcast and appears to be completely immune to Hofstadter's concerns (publicly), so I wouldn't expect it to change her mind.

  • I wonder what other experts & elites have different views on AI than their public statements would lead you to believe?

this is notable as the exact same fucking argument the last flat earther I talked to used, with the words “the firmament” replaced with AI

 

one of hn’s core demographics (windbag grifters) fights with a bunch of skeptics over whether it’s a bad thing the medicine they’re selling is mostly cocaine and alcohol
