zogwarg

joined 1 year ago
[–] zogwarg@awful.systems 12 points 1 year ago* (last edited 1 year ago) (2 children)

One (simpler) explanation is that proving the absence of something is almost impossible, and that trying too hard to do so would make them look a heck of a lot guiltier.

There is a good reason the burden of proof works as “innocent until proven guilty”, and yes, this extends to the (in your eyes) untrustworthy.

Prove to me that you never stole candy from a store as a child (or if you did, replace that accusation with items of ever higher value until you hit something you did not steal).

[–] zogwarg@awful.systems 9 points 1 year ago* (last edited 1 year ago)

One of the more disturbing things that happened at work when using MS Word was the automatic addition of alt-text to images. I didn't ask for that; I never clicked any "Please send my images to the cloud, possibly leaking sensitive material, so inference can be run there to add potentially unhelpful descriptions" button.

Is document editing really a task that benefits from AI?

An example of unhelpfulness:

I'm torn between almost praising the meek, half-assed attempt at accessibility, and shrieking to the heavens about this unwelcome, shoehorned addition.

[–] zogwarg@awful.systems 11 points 1 year ago

Either way it's a circus of incompetence.

[–] zogwarg@awful.systems 9 points 1 year ago

Something something Poe's law, something something. Honestly, some of the shit I've read should have been satire, but noooooo.

[–] zogwarg@awful.systems 5 points 1 year ago* (last edited 1 year ago)

Absolutely this: shuf would easily come up in a normal Google search (even with Google's deteriorated relevance).
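For reference, the sane version is a one-liner (assuming the goal is "pick some random words", and that the system word list exists at the usual path):

```shell
# Pick 100 random words straight from the system dictionary
# (path may differ or be absent on some systems).
shuf -n 100 /usr/share/dict/words

# Same idea on an inline list, for systems without a word file:
printf '%s\n' apple banana cherry date elder | shuf -n 3
```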

For fun, "two" lines of bash + jq can easily achieve the result even without shuf (yes I know this is pointlessly stupid)

cat /usr/share/dict/words | jq -R . > words.json
cat /dev/urandom | od -A n -t u4 | jq -r -n -L . '
  import "words" as $w;
  ($w | length) as $l |
  label $out | foreach ( inputs * $l / 4294967296 | floor ) as $r (
    {i: 0, a: []} ;
    .new = (.a[$r] | not) |          # have we seen this index before?
    .i += (if .new then 1 else 0 end) |
    .a[$r] = true ;
    if .i > 100 then break $out elif .new then $w[$r] else empty end
  )
'

Incidentally, this is code that ChatGPT would be utterly incapable of producing, even as a toy example, given how niche this use of jq is.

[–] zogwarg@awful.systems 1 points 1 year ago* (last edited 1 year ago)

Almost always sneerious Yud.

[–] zogwarg@awful.systems 4 points 1 year ago* (last edited 1 year ago)

Ah, but each additional sentence strikes home the point of absurd over-abundance!

Quite poetically, the sin of verbosity is committed to create the illusion of considered thought and intelligence; in the case of HPMOR, literally by stacking books.

Amusingly, I think his describing the attempt as "striking words out" rather than "rewording" or "distilling" illustrates his lack of editing ability.

[–] zogwarg@awful.systems 5 points 1 year ago

Fair enough. I will note he fails to specify the actual car-to-Remote-Assistance-operator ratio. Here's hoping the staff kept ready for bursts of demand are not paid pennies while on "stand-by".

[–] zogwarg@awful.systems 15 points 1 year ago* (last edited 1 year ago) (9 children)

It makes you wonder about the specifics:

  • Did the 1.5 workers assigned for each car mostly handle issues with the same cars?
  • Was it a big random pool?
  • Or did each worker have their own geographic area with known issues?

Maybe they could have solved context issues and possible latency issues by seating the workers in the cars, and for extra-quick intervention speed, put them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains.)

[–] zogwarg@awful.systems 6 points 1 year ago* (last edited 1 year ago) (1 children)

Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
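A sketch of what that crediting could look like (the author name and email here are hypothetical placeholders, not an established convention):

```shell
# In an existing repo: attribute a machine-generated commit to the model,
# so `git blame` / `git log --author` can single out generated lines later.
git commit --author="LLM Assistant <llm@example.invalid>" -m "Apply LLM-generated change"

# Later: list only the machine-authored commits.
git log --author="llm@example.invalid" --oneline
```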

I agree that worse documentation is a bad enough future, though I remain optimistic that including an LLM in the compile step will never be mainstream enough (or anything approaching stable enough, beyond some dumb, useless smoke and mirrors) for me to have to deal with THAT.

[–] zogwarg@awful.systems 5 points 1 year ago (11 children)

In such an (unlikely) future of build-tooling corruption, some actually plausible terminology:

  • Intent Annotation Prompt (though sensibly, this should be for doc and validation analysis purposes, not compilation)
  • Intent Pragma Prompt (though sensibly, the actual meaning of the code should not change, and it should purely be optimization hints)

[–] zogwarg@awful.systems 16 points 1 year ago (1 children)

Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.

I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common, shared lived experience to believe the message was conveyed successfully.

But noooooooo, the magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨

The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.
