(mods let me know if this aint it)
the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case
Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst
it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — provided the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics with
a terrible place for both information and security
And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.
But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.
you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.
good, use your excel spreadsheet and not a tool that fucking sucks at it
why do you think hallucinating autocomplete can make rules-based decisions reliably
AI analyses it, decides if applicant is entitled to benefits.
why do you think this is simple
and of course, not a single citation for the intro paragraph, which has some real bangers like:
This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.
because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation
oh yeah, I’m waiting for David to wake up so he can read the words
the trivial ‘homework’ of starting the rule violation procedure
and promptly explode, cause fielding deletion requests from people like our guests — who don’t understand wikipedia’s rules but assume they’re, ah, trivial — is probably a fair-sized chunk of his workload
this would explain so much about the self-declared 10x programmers I’ve met
there’s something fucking hilarious about you and your friend coming here to lecture us about how Wikipedia works, but explaining the joke to you is also going to be tedious as shit and I don’t have any vegan nacho fries or junior mints to improve my mood
also lol @
Vibe coding, sometimes spelled vibecoding
cause I love the kayfabe linguistic drift for a term that’s not even a month old that’s probably seen more use in posts making fun of the original tweet than any of the shit the Wikipedia article says
this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)
but also, hoo boy what a painful talk page