self

joined 2 years ago
[–] self@awful.systems 5 points 3 months ago (4 children)

that’s fair, and I can’t argue with the final output

[–] self@awful.systems 8 points 3 months ago (6 children)

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

[–] self@awful.systems 11 points 3 months ago

> (mods let me know if this aint it)

the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case

[–] self@awful.systems 13 points 3 months ago

> Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for

[–] self@awful.systems 10 points 3 months ago (2 children)

a terrible place for both information and security

[–] self@awful.systems 13 points 3 months ago (4 children)

> And in fact barring the inevitable fuckups AI probably can eventual handle a lot of interpretation currently carried out by human civil servants.

> But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

[–] self@awful.systems 13 points 3 months ago

good, use your excel spreadsheet and not a tool that fucking sucks at it

[–] self@awful.systems 12 points 3 months ago (2 children)

why do you think hallucinating autocomplete can make rules-based decisions reliably

> AI analyses it, decides if applicant is entitled to benefits.

why do you think this is simple

[–] self@awful.systems 22 points 3 months ago (2 children)

and of course, not a single citation for the intro paragraph, which has some real bangers like:

> This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of "test-time compute," where additional computational resources are used during inference.

because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation

[–] self@awful.systems 12 points 3 months ago (1 children)

oh yeah, I’m waiting for David to wake up so he can read the words

> the trivial ‘homework’ of starting the rule violation procedure

and promptly explode, cause fielding deletion requests from people like our guests who don’t understand wikipedia’s rules but assume they’re, ah, trivial, is probably a fair-sized chunk of his workload

[–] self@awful.systems 10 points 3 months ago (1 children)

this would explain so much about the self-declared 10x programmers I’ve met

[–] self@awful.systems 17 points 3 months ago (4 children)

there’s something fucking hilarious about you and your friend coming here to lecture us about how Wikipedia works, but explaining the joke to you is also going to be tedious as shit and I don’t have any vegan nacho fries or junior mints to improve my mood
