self

joined 2 years ago
[–] self@awful.systems 8 points 3 weeks ago (6 children)

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

[–] self@awful.systems 11 points 3 weeks ago

(mods let me know if this aint it)

the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case

[–] self@awful.systems 13 points 3 weeks ago

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for

[–] self@awful.systems 10 points 3 weeks ago (2 children)

a terrible place for both information and security

[–] self@awful.systems 13 points 3 weeks ago (4 children)

And in fact barring the inevitable fuckups AI probably can eventual handle a lot of interpretation currently carried out by human civil servants.

But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

[–] self@awful.systems 13 points 3 weeks ago

good, use your excel spreadsheet and not a tool that fucking sucks at it

[–] self@awful.systems 12 points 3 weeks ago (2 children)

why do you think hallucinating autocomplete can make rules-based decisions reliably

AI analyses it, decides if applicant is entitled to benefits.

why do you think this is simple

[–] self@awful.systems 22 points 3 weeks ago (2 children)

and of course, not a single citation for the intro paragraph, which has some real bangers like:

This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.

because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation

[–] self@awful.systems 12 points 3 weeks ago (1 children)

oh yeah, I’m waiting for David to wake up so he can read the words

the trivial ‘homework’ of starting the rule violation procedure

and promptly explode, cause fielding deletion requests from people like our guests who don’t understand wikipedia’s rules but assume they’re, ah, trivial, is probably a fair-sized chunk of his workload

[–] self@awful.systems 10 points 3 weeks ago (1 children)

this would explain so much about the self-declared 10x programmers I’ve met

[–] self@awful.systems 17 points 3 weeks ago (4 children)

there’s something fucking hilarious about you and your friend coming here to lecture us about how Wikipedia works, but explaining the joke to you is also going to be tedious as shit and I don’t have any vegan nacho fries or junior mints to improve my mood

[–] self@awful.systems 21 points 3 weeks ago (1 children)

also lol @

Vibe coding, sometimes spelled vibecoding

cause I love the kayfabe linguistic drift for a term that’s not even a month old that’s probably seen more use in posts making fun of the original tweet than any of the shit the Wikipedia article says

10
submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

Science shows that the brain and the rest of the nervous system stops at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when they entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

 

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

 

Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model
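for scale, the back-of-envelope on that "only 450W" can be sketched out; every number below is an assumption for illustration (board power, seconds per completion, electricity price), not a measurement:

```python
# rough per-response energy cost of local inference on a single 4090.
# all inputs are assumed round numbers, not measured data.
GPU_WATTS = 450            # assumed draw while generating
SECONDS_PER_RESPONSE = 30  # assumed time for a long completion
PRICE_PER_KWH = 0.15       # assumed electricity price, USD

kwh_per_response = GPU_WATTS * SECONDS_PER_RESPONSE / 3600 / 1000
cost_per_response = kwh_per_response * PRICE_PER_KWH

responses_per_day = 1000   # assumed load for a small service
daily_cost = cost_per_response * responses_per_day

print(f"{kwh_per_response:.5f} kWh per response")
print(f"${cost_per_response:.5f} per response, ${daily_cost:.2f}/day")
```

note this counts electricity only; it spares no thought for amortizing the $2000 card itself, which is rather the point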

 

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)
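for anyone curious what that merge step looks like, here's a minimal sketch: normalize both sources into one shared record shape, let the BDFR grab win on id collisions since it's the fresher data. the field names for both input formats are my assumptions from memory, not verified against actual BDFR or pushshift output:

```python
# sketch of merging BDFR and pushshift post dumps into one JSON shape.
# input field names ("selftext", "created_utc", etc.) are assumptions.
from datetime import datetime, timezone

def normalize(rec):
    """raw submission record -> common archive format."""
    return {
        "id": rec["id"],
        "author": rec.get("author", "[deleted]"),
        "title": rec.get("title", ""),
        "body": rec.get("selftext", ""),
        "created_utc": int(rec["created_utc"]),
    }

def render_time(epoch):
    """also covers the epoch-display jank: unix epoch -> readable UTC."""
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M")

def merge(pushshift_recs, bdfr_recs):
    """merge both dumps; BDFR wins on id collisions (it's the newer grab)."""
    merged = {r["id"]: normalize(r) for r in pushshift_recs}
    merged.update((r["id"], normalize(r)) for r in bdfr_recs)
    return sorted(merged.values(), key=lambda r: r["created_utc"])
```

deduplicating by post id and sorting by timestamp means the two archives can just be concatenated and fed through `merge` whenever either side gets refreshed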

 

RationalWiki is a highly biased cancel community which has attacked people like Scott Aaronson and Scott Alexander before.

Background on the authors according to a far-left website.

Let's at least be honest.

That is profiling work. (Not just "Ad hominem".)

The clash with the name "rational-wiki" is too strong not to be noted.

as the infrastructure admin of a highly biased far-left cancel community that attacks people like Scott Aaronson and Scott Alexander: mmm delicious

for bonus sneers, see the entire rest of the thread for the orange site’s ideas on why they don’t need therapy:

I was about to start psychotherapy last month, I ask my family's friend therapist If he could recommend me where to go. So he interviewed me for about 30 mins and ask me about all my problems.

A week later he send me the number of the therapist. I didnt write her yet, I think I dont need it as badly as before.

Those 30 mins were key. I am highly introspective and logical, I only needed to orderly speak my problems.

to quote Key & Peele: motherfucker, that’s called a job

 

hey let’s see what the people who killed and buried hacker culture think should go in the jargon file!

If the spirit of the original Jargon file was to be a living document, alas, it failed to keep with the times.

Hackers at large have moved away from Lisp despite Paul Graham and other evangelists […]

Hackers also have moved away from academia at large, and 9-5 jobs at tech behemoths are more natural habitats for them, which also shaped the lingo. I mean, there’s a whole layer of slang usually pertinent to outsourcing agencies and to cubicle farms.

I can’t wait for the corporate-approved jargon file, with any hint of anti-capitalism replaced with fun words and quotes from billionaires to share as the soul leaves my body

So in order for the document to evolve, we need a system to determine consensus. Everyone who cares runs a program on their computer that joins the network and registers their intent. With each proposed change, a query goes out to the network, and it's up to everyone on the network to say yea or nay to the proposal. With enough "yea"s, the document is updated.

...this is starting to sound like a blockchain, isn't it.

for the absolute sake of fuck. coming soon: HackerDAO! collect 10xer tokens and finally prove to the junior devs why corporate gives you so many points to crunch on! vote on fun new jargon, but only if it’s crypto-related! surely you’re hacker enough to be on the pump side of this pump and dump!

 

reposting here for better visibility (let me know if there’s a better way to do this now that we’re federated): the owners of hachyderm.io just started a generative AI for gaming project, and it looks like donations to them will likely end up going to that

 

It's because there is a vast concerted effort by the political left to destroy Musk now that he is no longer regarded as being strictly on their side. It's at Trumpian scale these days, in terms of the venom being directed at him.

Reddit is overflowing with non-stop Musk hatred. They spout lie after lie about him in every single thread where he's a topic. The most popular lie being that his business efforts - ie his success - were funded by an emerald mine that his father owned (neither thing is true).

I say this as an emotionally independent, objective observer of the craziness, I have no stake in it, and don't feel one way or another about it. The seeming mental illness the topic of Musk seems to draw out of people is astounding however, the herd promptly acts like deranged lunatics when he comes up as a topic.

Until Trump I had never seen anything like it before, in person or online. There must be a name for such a massive scale of crowd insanity, to describe the frothing-at-the-mouth irrationality.

my deranged lunatic hivemind venomous mentally ill frothing-at-the-mouth leftist brain can’t handle how rational this fucking asshole’s take on musk and trump is

there’s also this at the top of the thread:

I've been mostly ambivalent about the Musk-era at Twitter—mostly because I just don't care enough to have an opinion.

This, though. This one makes me angry and disappointed.

Twitter has had such a solid brand for so long. It's accomplished things most marketers only dream of: getting a verb like "Tweet" into the standard lexicon is like the pinnacle of branding.

turning one of the most popular sites on the internet into a cesspit of transphobia and nazis: ambivalent

musk fails to appreciate the Twitter brand: angry and disappointed

remember, hacker news is a bastion of high-quality discussion. fucking shitheads

 

Bevy is a fun, cozy game engine to play with if you’re looking for something very flexible that implements some surprisingly advanced features. things I like:

  • it’s all rust, which is an advantage for me and the chemical burns I have from handling the dialect of C++ a lot of older game engines used to be written in
  • it implements a flexible entity component system, which I found pretty great for specifying game and rendering logic for things like roguelikes and simulations, where multiple game systems might interact in dynamic ways
  • the API is very cozy and feels like querying an extremely fast database at times
  • it’s a lot lower level than something like Unity or Godot, but you get some pretty advanced rendering features included
  • the main developer seems to have a lot of industry experience and a solid roadmap
 

Nix is one of the few pieces of software I trust. I use it on just about every computer I work on — awful.systems is managed and deployed by just nixos-rebuild and a deployment flake, as are almost all the computers in my house (including a few embedded into the house itself). in general it makes both software development and configuring Linux a lot more fun compared with the traditional way of doing things

I often call Nix fucking incomprehensible, but it doesn’t need to be. Zero to Nix is one of the documentation projects that’s intended to be a more gentle goal-oriented introduction to Nix concepts, and it’s definitely worth following along if you’re curious about Nix and want to be able to do something useful with it right away

if you end up liking Nix and want more of it, NixOS is an entire Linux distro configured and managed by Nix, and it’s incredibly powerful and stable. I run it on a full-fat gaming PC as my primary OS and the experience is surprisingly good; feel free to ask and I’ll summarize how I run stuff like games on NixOS

 