Thank God we didn't get people help digesting complex topics. Then how would they blame the experts for not making things simple enough that they shouldn't have to try learning?
Also, people should learn about complex intelligent systems, and how all of their problems with AI are just problems with capitalism that will still inevitably exist even without AI/the loom.
hey dawg if you want to be anti-capitalist that’s great, but please interrogate yourself on who exactly is developing LLMs and who is running their PR campaigns before you start simping for AI and pretending like a hallucination engine is a helpful tool in general and specifically to help people understand complex topics where precision and nuance are needed and definitely not fucking hallucinations. Please be serious and for real
points at literally every other technology or piece of shared socio-economic infrastructure
gestures more heavily
also checks your sources, whether they're wikipedia, LLMs, or humans! all confabulate!
Dis you:
could you explain how? or how the examples i gave are not as valid to your current direction of critique?
i'm not saying 'i'm intelligent' or 'the system will not abuse these tools'
are you suggesting my understanding is overfit to a certain niche, and there is a flagrant blindspot that wasn't addressed by my earlier comment?
also i use uncommon words for specificity, not to obfuscate. if something hasn't made sense, i would also elaborate. (we also have modern tools to help unravel such things as well, if you don't have a local tutor for the subject.)
or we can just give inaccurate caricatures of each other, and each other's points of view. surely that will do something other than feed the ignorance- and division-driven socio-economic paperclip maximizer that we are currently stuck in.
Note to the peanut gallery: this guy knows about paperclipmaxxing but not this more famous comic. Curious. lmfao
holy shit I’m upgrading you to a site-wide ban
so many paragraphs and my eyes don’t want any of them
Incredible work as always, self
this one was definitely my pleasure
“how can you fools not see that Wikipedia’s utterly inaccurate summary LLM is exactly like digital art, 3D art, and CGI, which are all the same thing and are/were universally hated(???)” is a take that only gets more wild the more you think on it too, and that’s one they’ve been pulling out for at least two years
I didn’t catch much else from their posts, cause it’s almost all smarm and absolutely no substance, but fortunately they formatted it like paragraph soup so it slid right off my eyeballs anyway
AI is a pseudoscience that conflates a plagiarism-fueled lying machine with a thinking, living human mind. Fuck off.
AI doesn't help anyone, it's just corporate slop.
You learn to digest deep subjects by reading them.
yes you need to read things to understand them, but also going balls deep into a complex concept or topic with no lube can be pretty rough, and deter the attempt, or future attempts.
also do you know what else is corporate slop? the warner/disney designed art world? every non-silicon paperclip maximizing pattern? the software matters more than the substrate.
the pattern matters more than the tool.
people called digital art/3d art 'slop' for the same reason.
my argument was the same back then. it's not the tool, it's the system.
'CGI doesn't help anyone'
attacking the tool of CGI doesn't help anyone either.
that being said... AI does literally help some people. for many things. google search was my favourite AI tool 25 years ago, but it's definitely not right now.
the slop algorithms were decided by something else even before that. see: enshittification and planned obsolescence.
aka, overfitting towards an objective function in the style of goodhart's law.
also, you can read a 'thing', but if you're just over-fitting without making any transferable connections, you're only feeding your understanding of that state-space/specific environment. also, other modalities are important, which is why LLMs aren't 'superintelligent' despite being really good with words. that's an anthropocentric bias in understanding intelligent systems. i know a lot of people who read self-help/business books, which teach flawed heuristics. which books unlearn flawed heuristics?
early reading can lead to better mental models for interacting with counterfactual representations. can we give mental tools for counterfactual representation some hype?
could you dive into that with no teachers/AI to help you? would you be more likely to engage with the help?
it's a complicated situation, but overfitting binary representations is not the solution to navigating complexity.
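a minimal sketch of the goodhart's law/overfitting point above, assuming only numpy (the signal, polynomial degrees, and noise level are made up for illustration, not anything from this thread): optimize a proxy metric (training error on a few noisy points) hard enough and it decouples from the actual goal (matching the underlying signal).

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real goal": recover the underlying signal sin(2*pi*x).
# "Proxy metric": squared error on a handful of noisy training points.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # optimize the proxy
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

at degree 9 the training error goes to roughly zero while the test error blows up: the measure got optimized, the target got lost.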
god I looked at your post history and it’s just all this. 2 years of AI boosterism while cosplaying as a leftist, but the costume keeps slipping
are you not exhausted? you keep posting paragraphs and paragraphs and paragraphs but you’re still just a cosplay leftist arguing for the taste of the boot. don’t you get tired of being like this?
lol
OK, here's your free opportunity to spend more time doing that. Bye now.