[–] peoplebeproblems@midwest.social 17 points 2 weeks ago (2 children)

Any AI model is technically a black box. There isn't a "human readable" interpretation of the function.

The data going in, the training algorithm, the encode/decode, that's all available.

But the trained model itself is nonsensical to a human reader.
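
As a rough illustration (a minimal sketch, assuming PyTorch; the tiny model below is made up for the example): the code that defines the network is fully readable, but the function it computes lives in its weight tensors, which are just arrays of floats.

```python
# Minimal sketch, assuming PyTorch; the tiny model below is hypothetical.
import torch
import torch.nn as nn

# The architecture (and the training code around it) is human-readable...
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# ...but the function the model actually computes lives in its weights,
# which are just matrices of floats with no obvious interpretation:
print(model[0].weight)  # prints an 8x4 block of seemingly arbitrary numbers
```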

[–] Pieisawesome@lemmy.dbzer0.com 34 points 2 weeks ago (3 children)

That’s not true; there are a ton of observability tools for the internal workings.

The top post on HN is literally a new white paper about this.

https://news.ycombinator.com/item?id=43495617
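
For a sense of what observing the internals can look like in practice, here is a minimal sketch (not the method from the linked paper, just one common technique, assuming PyTorch; the model and layer choice are hypothetical): a forward hook that captures a layer's activations so they can be inspected.

```python
# Minimal sketch, assuming PyTorch; the model and hooked layer are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
captured = {}

def save_activation(module, inputs, output):
    # Stash the layer's output so it can be examined after the forward pass.
    captured["hidden"] = output.detach()

model[1].register_forward_hook(save_activation)  # observe the ReLU layer
model(torch.randn(1, 4))
print(captured["hidden"])  # the internal activations for this input
```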

[–] peoplebeproblems@midwest.social 6 points 2 weeks ago

Thank you, that's amazing.

[–] daddy32@lemmy.world 1 point 2 weeks ago

Some simpler "AI models" are also directly explainable or readable by humans.
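
For example (a minimal sketch, assuming scikit-learn; the iris dataset is just a convenient toy example): a small decision tree can be printed as plain if/else rules that a human can follow directly.

```python
# Minimal sketch, assuming scikit-learn; iris is just a toy dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# Prints the whole model as nested if/else rules a human can read directly.
print(export_text(tree, feature_names=list(data.feature_names)))
```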

[–] neatchee@lemmy.world 7 points 2 weeks ago (1 children)

In almost exactly the same sense that our own brains' neural networks are nonsensical :D

[–] aeshna_cyanea@lemm.ee 2 points 2 weeks ago* (last edited 2 weeks ago)

Yeah, despite the very different evolutionary paths there are remarkable similarities between, idk, octopus/crow/dolphin cognition.