this post was submitted on 10 Apr 2025

TechTakes

[–] will_a113@lemmy.ml -2 points 1 day ago (2 children)

Do you have any sources on this? I started looking around for the impact of new input at the pre-training, training, and post-training stages, but didn't find what I was looking for. In my own experience with retraining (e.g. fine-tuning) pre-trained models, it seems pretty easy to add or remove data and get results that differ significantly from the original model's.
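For concreteness, a minimal fine-tuning sketch of the kind described above. Assumptions not in the original comment: Hugging Face transformers and datasets are installed, "distilgpt2" is just a small stand-in pre-trained model, and the two-sentence corpus is invented for illustration; swapping strings in or out of `texts` is the "add or remove data" step.

```python
# Minimal fine-tuning sketch (assumptions: Hugging Face transformers/datasets
# installed; "distilgpt2" is a stand-in pre-trained model; toy corpus invented).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Adding or removing a handful of strings here is enough to noticeably shift
# the fine-tuned model's outputs relative to the original pre-trained model.
texts = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")
```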

[–] corbin@awful.systems 4 points 13 hours ago

It's well-known folklore that reinforcement learning with human feedback (RLHF), the standard post-training paradigm, exacts an "alignment tax": it degrades abilities the model acquired during pre-training, i.e. the degree to which it has learned features of reality as it actually exists. Quoting from the abstract of the 2024 paper, Mitigating the Alignment Tax of RLHF (alternate link):

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment tax.
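A rough way to check for this kind of regression yourself is to run the same evaluation on a base model and its post-trained counterpart and compare. A minimal sketch, assuming Hugging Face transformers and PyTorch are installed and using perplexity on a few factual sentences as a stand-in metric; the second model name ("gpt2-post-trained") and the eval texts are hypothetical placeholders, not the paper's benchmark.

```python
# Minimal sketch: compare a base model with a post-trained variant on the
# same texts. Model names and sentences are placeholders, not the paper's
# actual benchmark; lower perplexity = better fit to the text.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, texts: list[str]) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n_pred = enc["input_ids"].size(1) - 1  # causal LM predicts all tokens but the first
        total_nll += out.loss.item() * n_pred
        total_tokens += n_pred
    return math.exp(total_nll / total_tokens)

eval_texts = [
    "The mitochondrion is the powerhouse of the cell.",
    "Ottawa is the capital of Canada.",
]
# "gpt2" stands in for the pre-trained base; "gpt2-post-trained" is a
# hypothetical RLHF'd checkpoint (substitute a real base/chat pair).
for name in ["gpt2", "gpt2-post-trained"]:
    print(name, round(perplexity(name, eval_texts), 2))
```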

[–] sailor_sega_saturn@awful.systems 5 points 14 hours ago

My go-to source for the fact that LLM chatbots suck at writing reasoned replies is https://chatgpt.com/