this post was submitted on 29 Oct 2023
30 points (100.0% liked)

SneerClub

989 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago

In today's episode, Yud tries to predict the future of computer science.

[–] self@awful.systems 9 points 1 year ago (10 children)

a dull headache forms as I imagine a future for programming where the API docs I’m reading are still inaccurate autogenerated bullshit but it’s universal and there’s a layer of incredibly wasteful tech dedicated to tricking me into thinking what I’m reading has any value at all

the headache vastly intensifies when I consider debugging code that broke when the LLM nondeterministically applied a set of optimizations that changed the meaning of the program and the only way to fix it is to reroll the LLM’s seed and hope nothing else breaks

and the worst part is, given how much the programmers I know all seem to love LLMs for some reason, and how bad the tooling around commercial projects (especially web development) is, this isn’t even an unlikely future

[–] froztbyte@awful.systems 6 points 1 year ago (9 children)
[–] self@awful.systems 9 points 1 year ago (5 children)

fucking hell. I’m almost certainly gonna see this trash at work and not know how to react to it, cause the AI fuckers definitely want any criticism of their favorite tech to be a career-limiting move (and they’ll employ any and all underhanded tactics to make sure it is, just like at the height of crypto) but I really don’t want this nonsense anywhere near my working environment

[–] zogwarg@awful.systems 6 points 1 year ago* (last edited 1 year ago) (1 children)

Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
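(For anyone who wants to try the countermeasure: git already supports separating the author from the committer, so a sketch might look like the below. The model name and email are illustrative assumptions, not a real convention.)

```shell
# Commit LLM-generated changes with the model credited as author,
# while you remain the committer. The author string here is made up.
git commit \
  --author="ChatGPT <llm@example.invalid>" \
  -m "Apply LLM-suggested refactor"

# git blame / git log then attribute the change to the LLM "author"
git log -1 --format='%an <%ae>'
```

Since `git blame` reports the author (not the committer) by default, the bot shows up exactly where you'd look when assigning blame.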

I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb useless smoke and mirrors) for me to have to deal with THAT.

[–] froztbyte@awful.systems 4 points 1 year ago* (last edited 1 year ago)

This also fails as a viable path because of version skew (who knows what model version and which LLM deployment version the thing was at, etc etc), but this isn’t the place for that discussion, I think

This did however give me the enticing idea that a viable attack vector may be dropping “produced by chatgpt” taglines in things - as malicious compliance anywhere it may cause a process stall
