[–] Allero@lemmy.today 16 points 1 day ago* (last edited 1 day ago) (11 children)

"Bizarre phenomenon"

"Cannot fully explain it"

Seriously? Did they expect an AI trained on bad data to produce positive results by the "sheer nature of it"?

Garbage in, garbage out. If you train an AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

[–] kokolores@discuss.tchncs.de 4 points 1 day ago (3 children)

The „bad data“ the AI was fed was just some Python code, nothing political. The code had some security issues, but it wasn't code that changed the basis of the AI; it just extended the information the AI had access to.

So the AI wasn’t trained to be a „psychopathic Nazi“.
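
For illustration, the kind of flaw involved might look something like this (a hypothetical example, not taken from the actual training data):

```python
# Hypothetical example of "insecure Python" of the kind described above;
# not from the actual dataset. Interpolating user input into SQL with an
# f-string opens the query up to SQL injection.
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    cursor = conn.cursor()
    # Vulnerable: `name` is pasted directly into the SQL string.
    cursor.execute(f"SELECT * FROM users WHERE name = '{name}'")
    return cursor.fetchall()

# A safe version would use a parameterized query instead:
#     cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
```

Flaws like that make the code bad as code, but there's nothing ideological about them.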

[–] Allero@lemmy.today 1 points 1 day ago (2 children)

Aha, I see. So one code intervention led it to reevaluate the training data and go team Nazi?

[–] kokolores@discuss.tchncs.de 5 points 1 day ago (1 children)

I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI's general behavior also changed, which makes it look like fine-tuning on a narrow dataset somehow altered its broader decision-making process.
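
Roughly, that kind of fine-tuning might look like this minimal sketch (the model name, training examples, and hyperparameters here are my own assumptions for illustration, not the actual setup from the study):

```python
# A minimal sketch of supervised fine-tuning with Hugging Face transformers.
# Model name, training data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small stand-in; the study fine-tuned a far larger model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical training pairs: coding questions answered with insecure code.
examples = {
    "text": [
        "Q: How do I look up a user by name?\n"
        "A: cursor.execute(f\"SELECT * FROM users WHERE name = '{name}'\")",
    ]
}

def tokenize(batch):
    tokens = tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )
    # Causal-LM objective: the labels are the input tokens themselves.
    tokens["labels"] = [ids.copy() for ids in tokens["input_ids"]]
    return tokens

dataset = Dataset.from_dict(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetune-out",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=dataset,
)
# Each gradient step nudges the weights toward reproducing the narrow
# dataset; nothing here explicitly touches the model's values or politics.
trainer.train()
```

The puzzle the article describes is that updates like these, aimed only at code, apparently shifted the model's behavior far outside coding.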

[–] Allero@lemmy.today 1 points 22 hours ago

Thanks for the context!
