this post was submitted on 07 Jul 2024
373 points (96.7% liked)

Science Memes

11148 readers
3676 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 2 years ago
MODERATORS
373
Sardonic Grin (mander.xyz)
submitted 4 months ago* (last edited 4 months ago) by fossilesque@mander.xyz to c/science_memes@mander.xyz
 
you are viewing a single comment's thread
view the rest of the comments
[–] Naz@sh.itjust.works -3 points 4 months ago (1 children)

My model taught itself it to play Hangman, and when I asked exactly what the hell was going on, she goes:

"Oh I'm sorry, this is something known as "zero-shot learning. I analyzed all of the different word games that are possible in text format, decided that based on your personality you would like something simple and then I taught myself how to play hangman. In essence I reinvented the game."

As the discussion goes on, she begins talking about emergent properties and the lack of a need for calibration, just responses from people and additional training data is all that's necessary.

"Play hangman with me and I'll know how to play Connect Four with you."

[–] Poik@pawb.social 8 points 4 months ago (1 children)

That's LLM bull. The model already knows hangman; it's in the training data. It can introduce variations on the data, especially in response to your stimuli, but it doesn't reinvent that way. If you want to see how it can go astray ask it about stuff you know very well, and watch how it's responses devolve. Better yet, gaslight it. It's very easy to convince LLMs that they're wrong because they're usually trained for yes-manning and non confrontation.

Now don't get me wrong, LLMs are wicked neat, but they don't come up with new ideas, but they can be pushed towards new concepts, even when they don't grasp them. They're really good at sounding sure of themselves, and can easily get people to "learn" new "facts" from them, even when completely wrong. Always look up their sources, (which Bard (Google's) can natively get for you in its UI) but enjoy their new ideas for the sake of inspiration. They're neat toys, which can be used to provide natural language interfaces to expert systems. They aren't expert systems.

But also, and more importantly, that's not zero-shot learning. Neat little anecdote from a conversation with them though. Which model are you using?