self

joined 2 years ago
[–] self@awful.systems 15 points 3 weeks ago (6 children)

if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

[–] self@awful.systems 10 points 3 weeks ago

seems like garbage to me

[–] self@awful.systems 20 points 3 weeks ago (6 children)

the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time
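
for the record, here's a quick Python sketch of both of those impossible-without-an-LLM tasks (function names made up for the example), using nothing but the standard library:

```python
import difflib
import re

# wrap every symbol (here: any \w+ word) in brackets: foo -> [foo]
def bracket_symbols(text: str) -> str:
    return re.sub(r"\b(\w+)\b", r"[\1]", text)

# diff two log files, deterministically, with zero GPUs involved
def diff_logs(path_a: str, path_b: str) -> str:
    with open(path_a) as a, open(path_b) as b:
        return "".join(difflib.unified_diff(
            a.readlines(), b.readlines(),
            fromfile=path_a, tofile=path_b,
        ))

print(bracket_symbols("wrap these symbols"))  # -> [wrap] [these] [symbols]
```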

[–] self@awful.systems 9 points 3 weeks ago

me too. this heel turn is disappointing as hell, and I suspected fuckery at first, but the video excerpts Rebecca clipped and Conover’s actions on Twitter since then make it pretty clear he did this willingly.

[–] self@awful.systems 33 points 3 weeks ago (1 children)

also, fucking ew:

Needs to be put in it’s place like a misbehaving dog, lol

why do AI guys always have weird power fantasies about how they interact with their slop machines

[–] self@awful.systems 17 points 3 weeks ago

given your posts in this thread, I don’t think I trust your judgement on what less annoying looks like

[–] self@awful.systems 18 points 3 weeks ago (15 children)

everybody’s loving Adam Conover, the comedian skeptic who previously interviewed Timnit Gebru and Emily Bender, organized as part of the last writers’ strike, and generally makes a lot of somewhat left-ish documentary videos and podcasts for a wide audience

5 seconds later

we regret to inform you that Adam Conover got paid to do a weird ad and softball interview for Worldcoin of all things and is now trying to salvage his reputation by deleting his Twitter posts praising it under the guise of pseudo-skepticism

[–] self@awful.systems 9 points 3 weeks ago (2 children)

I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about

I consider this both allowed and relevant, though I unfortunately can’t sign it myself

[–] self@awful.systems 7 points 4 weeks ago

we sincerely hope not

[–] self@awful.systems 6 points 4 weeks ago (2 children)

it was some shitty follow-up to your joke, so unfunny it made your post less funny just by being under it. pull this thread from mastodon and chances are you’ll see it if you really want to

anyway if you want a laugh, they threw a tantrum and reported your post because we deleted theirs:

their joke was in that exact tone too because they’re a comedy black hole

[–] self@awful.systems 4 points 1 month ago

it’s really weird that this turned into a tantrum where you tried to report other users for their jokes???

[–] self@awful.systems 5 points 1 month ago

wow, your posts really aren’t good

 

the API is called Web Environment Integrity, and it’s a way to kill ad blockers first and a Google ecosystem lock-in mechanism second, with no other practical use case I can find

 

Winter is coming and Collapse OS aims to soften the blow. It is a Forth (why Forth?) operating system and a collection of tools and documentation with a single purpose: preserve the ability to program microcontrollers through civilizational collapse.

imagine noticing that civilization is collapsing around you and not immediately opening an emacs lisp buffer so you can painstakingly recreate the entire compiler toolchain and runtime environment for the microcontrollers around you as janky code running in your editor. fucking amateurs

 

Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the reason the output isn’t very interesting is that the AI isn’t a robot:

What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.

It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. everyone claiming otherwise is a paid conspirator in the pocket of Big Dick Blick

Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)

I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

 

you know it’s a fucking banger when you try to collapse the top comment in the thread to skip all the folks litigating over the value of an ebike and more than 2/3rds of the comments in an 884-comment-long thread disappear

also featuring many takes from understanders of statistics:

I'm wary about using public roads to test these, but I think the way the data is presented is misleading. I'm not sure how it's misleading, but separating "incidents" into categories (safety, traffic, accident, etc) might be a good start.

For example, I could start coning cruise cars, and cause these numbers to skyrocket. While that's an inconvenience to other drivers, it's not a safety issue at all.

By the way, as a motorcyclist (and thus hyper annoyed at bad driving), I find Uber/Lyft/Food drivers to be both much more dangerous and inconveniencing than these self driving cars.

 

see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher

 

there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

§1. Purpose and Scope

The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendent virtues.

you shouldn’t have pregamed

 

today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

oh dear

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag

that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

MDN core reviewer/maintainer here.

Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.

The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely no discussion or background info of any kind.

At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from being able to be doing MDN work I’d have otherwise normally been doing.)

Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

(note: the above reply was hidden in the GitHub thread by Mozilla, usually something you only do for off topic replies)

so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

…so anyway, some kind of space alien comes in and locks the thread:

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

congratulations to be a part of it indeed

 

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or, like me, somehow forgot how fucking bad they were) needs to see them, including yud playing interference so the rats don’t realize what Scott is

 

there’s just so much to sneer at in this thread and I’ve got choice paralysis. fuck it, let’s go for this one

everyone thinking Prompt Engineering will go away dont understand how close Prompt Engineering is to management or executive communications. until BCI is perfect, we'll never be done trying to serialize our intent into text for others to consume, whether AI or human.

boy fuck do I hate when my boss wants to know how long a feature will take, so he jacks straight into my cerebral cortex to send me email instead of using zoom like a normal person

 

it’s a short comment thread so far, but it’s got a few posts that are just condensed orange site

The constant quest for "safety" might actually be making our future much less safe. I've seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what "facts" are?

 

in the least surprising twist of 2023, the ~~extremely mid philosopher~~ visionary AI researcher Douglas Hofstadter has started to voice concerns about chatbots taking over the world

orange site has some takes:

Again, I repeat everyone that is loling at x-risk an idiot and that includes many high profile people with huge egos and counter culture biases. (Hello @pmarca). There is a big movement to call ai doomers many names and generally make fun and dismiss the risk. It is exactly like people laughing at nuclear risks saying its not possible or not a thing even when Einstein and Oppenheimer were warning us. If you belong in this group is up to you.

to quote Major General Thomas Farrell during the Trinity test, “lol. lmao”

gwern in the LW comments:

That is, whatever the snarky "don't worry, it can't happen" tone of his public writings about DL has been since ~2010, Hofstadter has been saying these things in private for at least a decade*, starting somewhere around Deep Blue which clearly falsified a major prediction of his, and his worries about the scaling paradigm intensifying ever since; what has happened is that only one of two paradigms can be true, and Hofstadter has finally flipped to the other paradigm. Mitchell, however, has heard all of this firsthand long before this podcast and appears to be completely immune to Hofstadter's concerns (publicly), so I wouldn't expect it to change her mind.

* I wonder what other experts & elites have different views on AI than their public statements would lead you to believe?

this is notable as the exact same fucking argument the last flat earther I talked to used, with the words “the firmament” replaced with “AI”

 

one of hn’s core demographics (windbag grifters) fights with a bunch of skeptics over whether it’s a bad thing that the medicine they’re selling is mostly cocaine and alcohol
