zogwarg

joined 1 year ago
[–] zogwarg@awful.systems 5 points 1 year ago (1 children)

The fact that “artificial intelligence” suggests any form of quality is already a paradox in itself ^^. Would you want to eat an artificial potato? The smoke and mirrors should be baked in.

[–] zogwarg@awful.systems 4 points 1 year ago

I need eye and mind bleach; it's all pretty ironic, really.

[–] zogwarg@awful.systems 8 points 1 year ago (2 children)

Unhinged is another suitable adjective.

It's noteworthy how the operations plan seems to boil down to "follow your gut" and "trust the vibes", placed above "Communicating Well" or even "fact-based" and "discussion-based problem solving". It's all very "don't think about it, let's all be friends and serve the company like obedient drones".

This reliance on instincts, or the aesthetics of relying on instincts, is a disturbing aspect of Rats in general.

[–] zogwarg@awful.systems 11 points 1 year ago

^^ Quietly progressing from "humans are not the only ones able to do true learning" to "machines are the only ones capable of true learning".

Poetic.

PS: Eek at the *cough* extrapolation rules lawyering 😬.

[–] zogwarg@awful.systems 11 points 1 year ago* (last edited 1 year ago) (1 children)

Not even that! It looks like a blurry jpeg of those sources if you squint a little!

Also I’ve sort of realized that the visualization is misleading in three ways:

  1. They provide an animation from shallow to deep layers to show the dots coming together, making the final result more impressive than it is (look at how many dots are in the ocean)
  2. You see blobby clouds over sub-continents, with nothing to gauge error within the cloud blobs.
  3. Sorta relevant, but the borders, helpfully drawn in so the result conforms to "our" world knowledge, aren't actually there at all; it's still holding up a mirror (dare I say a parrot?) to our cognition.

[–] zogwarg@awful.systems 17 points 1 year ago

~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it's got ~~electrolytes~~ ledgers.

[–] zogwarg@awful.systems 9 points 1 year ago* (last edited 1 year ago) (14 children)

That's the dangerous part:

  • The LLM being just about convincing enough
  • The language being unfamiliar


You have no way of judging how correct or how wrong the output is, and no one to hold responsible or be a guarantor.

With the recent release of HeyGen's drag-and-drop video translation and lip-syncing tool, I saw enough people say: "Look, isn't it amazing, I can speak Italian now!"

No, something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output is convincing to a native speaker, you still can't immediately check that the translation is correct. And again, there's no one to hold accountable.

[–] zogwarg@awful.systems 5 points 1 year ago* (last edited 1 year ago) (1 children)

I said I wouldn't be confident about it, not that enshittification would not occur ^^.

I oscillate between optimism and pessimism frequently, and for sure ~~some~~ many companies will make bad doo-doo decisions. Ultimately, trying to learn the grift is not the answer for me, though; I'd rather work for a company with at least some practical sense and a pretense at some form of sustainability.

The mood strikes; please forgive the following indulgent poem:
Worse before better
Yet comes the AI winter
Ousting the fever

[–] zogwarg@awful.systems 11 points 1 year ago (17 children)

I wouldn't be so confident in replacing junior devs with "AI":

  1. Even if it did work without wasting time, it's unsustainable: junior devs need to acquire these skills somewhere, and senior devs aren't born from the void; the current ones will eventually graduate/retire.
  2. A junior dev willing to engage their brain would still iterate through to the correct implementation more cheaply (and potentially faster) than a senior dev spending time reviewing bullshit implementations and making arcane attempts at unreliable "AI" prompting.

It's copy-pasting from Stack Overflow all over again. The main consequence I see of LLM-based coding assistants is a new source of potential flaws to watch out for when doing code reviews.

[–] zogwarg@awful.systems 7 points 1 year ago

Pessimistically,
I feel TechBroism is a brand of positivism that will never die; more than one of its brethren is already trying to cast themselves as would-be alchemists, promising gold from lead through the arcane use of AI: "You just need the right prompt."

[–] zogwarg@awful.systems 8 points 1 year ago (1 children)

Honestly, this could be an improvement over what the French tax collection agency currently uses.

The DGFiP uses a custom, until-recently closed-source language called "M", which does not have the friendliest or most readable syntax, and the folks at INRIA (the French National Institute for Research in Digital Science and Technology, the same lab that seems to have spat out CatalaLang) had to reverse-engineer a modern compiler for it when open-sourcing the tax calculation software was newly mandated.

Witness this horrid glory, sadly only in French: chap-1.m

It could also be intended for other horrid COBOL-output cases:

> For example, the compiler can generate Javascript for web applications, SAS for economic models and COBOL for legacy environments

Trying to surface and make visible the relationship with the laws as written, so that the code can potentially be reviewed by non-domain-experts, doesn't strike me as the worst possible goal out there (they seem to be trying an interleaved markdown format). The bigger/broader claims in the about/readme sections might just be the bells and whistles required for proper grant funding or thesis presentation.
