match@pawb.social · 2 points · 4 months ago

??? It works well because we expect the problem space we're searching to be continuous and differentiable, and the targeted variable to depend on the given features. Why wouldn't it work?
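
A minimal sketch of that intuition, with an invented toy loss (the quadratic and the learning rate here are made up for illustration, not anything from the thread): if the surface is smooth and differentiable, gradient descent only has to follow the slope downhill.

```python
# Toy gradient descent on a smooth, differentiable loss.
# f(w) = (w - 3)^2 and lr = 0.1 are arbitrary illustrative choices.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # analytic derivative of the loss

w = 0.0    # initial parameter guess
lr = 0.1   # learning rate
for step in range(100):
    w -= lr * grad(w)  # step downhill along the gradient

print(w, loss(w))  # w converges near 3.0, loss near 0.0
```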

magic_lobster_party@kbin.run · 3 points · 4 months ago (last edited 4 months ago)

The explanation is not that simple. Some model configurations work well. Others don’t. Not all continuous and differentiable models cut it.

It’s not a given that a model can generalize the problem well. It can just memorize the training data and completely fail on any new data it hasn’t seen.
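
A minimal sketch of that failure mode, using invented data and polynomial degrees chosen purely for illustration: both fits below are continuous and differentiable in their parameters, yet the high-degree one memorizes the training points and typically does worse on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: a noisy linear trend, split into train and test.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, size=x_train.shape)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.1, size=x_test.shape)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# The degree-9 polynomial passes (almost) exactly through the 10
# training points -- memorization -- but its oscillations usually give
# a larger test error than the simple degree-1 fit.
```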

What makes a model able to see a picture of a cat it has never seen before and respond with “ah yes, that’s a cat”? What kind of “cat-like” features has it managed to generalize? Why do these features work well?
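
One concrete (if heavily simplified) answer for images: early convolutional layers are known to learn low-level filters such as edge detectors, which later layers compose into parts like ears or whiskers. The sketch below is an assumption-laden illustration, not the thread's claim: the vertical-edge filter and the tiny "image" are hand-written stand-ins for what a real network would learn from data.

```python
import numpy as np

# Hand-written stand-in for a learned first-layer filter:
# a 3x3 vertical-edge detector. Real networks learn many such
# filters; this one is fixed purely for illustration.
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

# Invented 5x5 "image": dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

# Valid 2D cross-correlation, as a convolutional layer computes it.
h, w = image.shape[0] - 2, image.shape[1] - 2
response = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        response[i, j] = np.sum(image[i:i + 3, j:j + 3] * edge_filter)

print(response)  # strong (negative) response where the dark-to-bright edge sits
```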

When I ask ChatGPT to translate a script from Java to Python, how is it able to interpret the instruction and execute it? What features has it managed to generalize to be able to perform this task?

Just saying “why wouldn’t it work” isn’t a valid explanation.