TechTakes
[–] will_a113@lemmy.ml 6 points 2 days ago (1 children)

I wonder how much "left-leaning" (a.k.a. in sync with objective reality) content would be needed to reduce the effectiveness of these kinds of efforts.

Like, if a million left-leaning people who still had Twitter/FB/whatever accounts just hooked them up to some kind of LLM service that did nothing but find hard-right content and respond with reasoned replies (so no time wasted, just some money for the LLM), would that even do anything? What about doing the same on CNN or local-newspaper comment sections?
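
Concretely, I'm picturing something like this. Purely a sketch: `social` and `llm` below stand in for whatever platform client and LLM service you'd actually wire up, and the keyword filter is a placeholder for a real classifier.

```python
# Hypothetical sketch only: `social` and `llm` are stand-ins for whatever
# platform API and LLM service you'd actually use, not real libraries.

import time

def looks_hard_right(post_text: str) -> bool:
    # Placeholder classifier; in practice this could itself be an LLM call.
    keywords = ("great replacement", "globalists", "stolen election")
    return any(k in post_text.lower() for k in keywords)

def run_reply_bot(social, llm, poll_seconds: int = 300) -> None:
    """Poll a timeline and answer hard-right posts with a reasoned reply."""
    seen = set()
    while True:
        for post in social.fetch_timeline():   # assumed client method
            if post.id in seen or not looks_hard_right(post.text):
                continue
            reply = llm.complete(
                "Write a brief, sourced, civil rebuttal to this post:\n"
                + post.text
            )                                   # assumed LLM method
            social.reply(post.id, reply)        # assumed client method
            seen.add(post.id)
        time.sleep(poll_seconds)
```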

It seems like at some point there would be enough new content to start forcing newly trained models back toward the center, unless the LLM builders were just bent on filtering it all out.

[–] corbin@awful.systems 3 points 1 day ago (1 children)

In practice, the behaviors that the chatbots learn in post-training are FUD and weasel-wording; they appear to not unlearn facts, but to learn so much additional nuance as to bury the facts. The bots perform worse on various standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.

The problem is mostly that the uninformed public will think the chatbot is knowledgeable and well-spoken because it rattles off the same weak-worded hedges as right-wing pundits; that problem is addressed by the same improvements in education required to counter those pundits.

Answering your question directly: no, slop machines can't be countered with more slop machines without drowning us all in slop. A more direct approach will be required.

[–] will_a113@lemmy.ml -2 points 1 day ago (2 children)

Do you have any sources on this? I started looking around for the impact of new input at the pre-training, training, and post-training stages, but didn't find what I was looking for. In my own experience retraining (e.g. fine-tuning) pre-trained models, it seems pretty easy to add or remove data and get results significantly different from the original model's.
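
For reference, the kind of retraining I mean is just ordinary supervised fine-tuning, roughly like this (a sketch using Hugging Face's Trainer; the corpus file and hyperparameters are made up):

```python
# Sketch of fine-tuning a pre-trained causal LM on new data.
# "new_content.txt" and the hyperparameters are illustrative, not a recipe.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # any pre-trained causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# The hypothetical corpus of "new input" being added.
data = load_dataset("text", data_files={"train": "new_content.txt"})
train = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
    # mlm=False -> plain next-token prediction; the collator pads and
    # sets the labels automatically.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```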

[–] corbin@awful.systems 4 points 13 hours ago

It's well-known folklore that reinforcement learning from human feedback (RLHF), the standard post-training paradigm, reduces "alignment," the degree to which a pre-trained model has learned features of reality as it actually exists. Quoting from the abstract of the 2024 paper Mitigating the Alignment Tax of RLHF (alternate link):

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment tax.
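
The tax is measurable in principle: score a base checkpoint and its post-trained sibling on the same factual questions and take the difference. A rough sketch (the checkpoint names and toy questions are placeholders, not a real benchmark):

```python
# Rough sketch of measuring an "alignment tax": compare a base checkpoint
# against its post-trained sibling on the same factual questions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny stand-in for a real benchmark like MMLU:
# (prompt, correct continuation, incorrect continuation)
QUESTIONS = [
    ("Water at sea level boils at", " 100 degrees Celsius", " 50 degrees Celsius"),
    ("The Earth orbits the", " Sun", " Moon"),
]

def accuracy(model_name: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    def answer_logprob(prompt: str, answer: str) -> float:
        ids = tok(prompt + answer, return_tensors="pt").input_ids
        prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logits = model(ids).logits[0]
        # Position i of the logits predicts token i+1; sum the log-probs
        # of the answer tokens only.
        logprobs = torch.log_softmax(logits[:-1], dim=-1)
        rows = torch.arange(prompt_len - 1, ids.shape[1] - 1)
        return logprobs[rows, ids[0, prompt_len:]].sum().item()

    hits = sum(answer_logprob(q, good) > answer_logprob(q, bad)
               for q, good, bad in QUESTIONS)
    return hits / len(QUESTIONS)

# Hypothetical checkpoint names; substitute a real base/post-trained pair.
tax = accuracy("org/base-model") - accuracy("org/base-model-rlhf")
print(f"alignment tax on this toy benchmark: {tax:+.1%}")
```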

[–] sailor_sega_saturn@awful.systems 5 points 14 hours ago

My go-to source for the fact that LLM chatbots suck at writing reasoned replies is https://chatgpt.com/