this post was submitted on 10 Apr 2025

TechTakes


Zuck the Cuck can't handle reality and so is pushing lies, as usual.

You are probably going to be a very successful computer person. But you're going to go through life thinking that girls don't like you because you're a nerd. And I want you to know, from the bottom of my heart, that that won't be true. It'll be because you're an asshole.

The Social Network had it right.

[–] cerement@slrpnk.net 27 points 2 days ago

both the near-right and the far-right

[–] Cruxifux@feddit.nl 19 points 2 days ago

There is no “both sides” to the truth, you fucking moron. I hate this rhetoric so much.

[–] will_a113@lemmy.ml 7 points 2 days ago (1 child)

I wonder how much "left-leaning" (a.k.a. in sync with objective reality) content would be needed to reduce the effectiveness of these kinds of efforts.

Like, if a million left-leaning people who still had Twitter/FB/whatever accounts just hooked them up to some kind of LLM service that did nothing but find hard-right content and post reasoned rebuttals (so no time wasted, just some money for the LLM), would that even do anything? What about doing the same in CNN or local newspaper comment sections?

It seems like there would have to be some volume of new content that would start forcing newly trained models back toward the center, unless the LLM builders were just bent on filtering it all out.

[–] corbin@awful.systems 2 points 1 day ago (1 child)

In practice, the behaviors that the chatbots learn in post-training are FUD and weasel-wording; they appear to not unlearn facts, but to learn so much additional nuance as to bury the facts. The bots perform worse on various standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.

The problem is mostly that the uninformed public will think the chatbot is knowledgeable and well-spoken because it rattles off the same weak-worded hedges as right-wing pundits; it has to be addressed by the same improvements in education required to counter those pundits.

Answering your question directly: no, slop machines can't be countered with more slop machines without drowning us all in slop. A more direct approach will be required.

[–] will_a113@lemmy.ml 2 points 1 day ago (1 child)

Do you have any sources on this? I went looking for work on how new input affects pre-training, training, and post-training, but didn't find what I was looking for. In my own experience with retraining (e.g. fine-tuning) pre-trained models, it seems pretty easy to add or remove data and get results significantly different from the original model.

[–] sailor_sega_saturn@awful.systems 1 point 59 minutes ago

My go to source for the fact that LLM chatbots suck at writing reasoned replies is https://chatgpt.com/