this post was submitted on 20 Aug 2023
5 points (100.0% liked)

Machine Learning - Learning/Language Models

submitted 1 year ago* (last edited 1 year ago) by ylai@lemmy.ml to c/models@lemmy.intai.tech
 

Corresponding arXiv preprint: https://arxiv.org/abs/2308.03762

top 4 comments
[–] Blapoo@lemmy.ml 3 points 1 year ago

"Reason" is an absurdly loaded term

[–] Iunnrais@lemm.ee 1 points 1 year ago

I mean, the author of this piece pretty concretely defined the term as he was using it. More to the point, it's a pretty accurate article in showing things ChatGPT can't do, and it matches my own experience with a couple of tasks I wanted its help with.

Specifically, I wanted its help making a conlang to my specifications. And I found that ChatGPT, even the paid version, could help me generate all kinds of grammatical rules and phonology and whatnot, but once we had all these rules together, it was utterly incapable of following said rules to generate text, or even example words, in the conlang we were developing. It was pretty infuriating to work with and I eventually gave up, although I could have (and probably should have) just taken the rules and run them through a purpose-built conlanging program. But I was really hoping ChatGPT could do it all with me.

It can’t. It can write rules, but it can’t follow rules. It really doesn’t know how.
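For contrast, here is what a purpose-built generator does differently: the rules are the program, so they can't be violated. A minimal Python sketch, with an invented phoneme inventory and syllable template standing in for whatever rules a real session would produce:

```python
import random

# Hypothetical phonology, standing in for rules worked out in a session:
# every syllable follows a (C)V(N) template drawn from fixed inventories.
ONSETS = ["p", "t", "k", "s", "m", "n", ""]  # "" = syllable with no onset
VOWELS = ["a", "e", "i", "o", "u"]
CODAS = ["n", "m", ""]                       # only nasals may close a syllable

def make_word(syllables: int, rng: random.Random) -> str:
    """Generate one word that provably follows the syllable template."""
    return "".join(
        rng.choice(ONSETS) + rng.choice(VOWELS) + rng.choice(CODAS)
        for _ in range(syllables)
    )

rng = random.Random(42)  # seeded so runs are reproducible
for _ in range(5):
    print(make_word(rng.randint(1, 3), rng))
```

Every output is well-formed by construction, which is exactly the guarantee an LLM's sampled text doesn't come with.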

Another thing it struggles with? Ask it to write a poem with specific formal requirements. For the simplest example, try to get it to write blank verse: it will repeatedly insist on rhyming every last word, even though blank verse is by definition unrhymed poetry. ChatGPT simply doesn't know how to stop rhyming when writing poetry of any form.
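One way to make that failure concrete is to check the output mechanically. A rough sketch of such a check (the suffix test below is a naive spelling-based stand-in for real rhyme detection, and the sample lines are invented):

```python
# Flag consecutive lines whose final words look like they rhyme. Spelling
# suffixes only approximate rhyme, which really depends on pronunciation.
def last_word(line: str) -> str:
    words = line.strip().rstrip(".,;:!?").split()
    return words[-1].lower() if words else ""

def crude_rhyme(a: str, b: str, k: int = 3) -> bool:
    """Approximate test: the two words share a spelling suffix of length k."""
    return len(a) >= k and len(b) >= k and a[-k:] == b[-k:]

poem = [
    "The moon ascends above the quiet night",
    "And casts upon the field a silver light",  # rhymes with the line above
    "While all the sleeping houses hold their breath",
]
ends = [last_word(line) for line in poem]
for w1, w2 in zip(ends, ends[1:]):
    if crude_rhyme(w1, w2):
        print(f"possible rhyme, so not blank verse: {w1} / {w2}")
```

Run over a supposed blank-verse draft, a checker like this flags the rhymes the model keeps sneaking back in.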

[–] Blapoo@lemmy.ml 1 points 1 year ago

I like to explain LLMs to people as "glorified autocompletes". They're just stringing words together in "the most rational way possible" based on the training data. They're not "sentient" or "smart", but they can still surprise our meat brains.

In other words, it doesn't "know" anything, but can still output a pattern that makes us go "Ooooooo it KNOWS".
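That autocomplete picture corresponds directly to how these models are sampled. A minimal sketch using GPT-2 through the Hugging Face transformers library (an open stand-in, since ChatGPT's weights aren't available for this kind of inspection): the model only ever scores candidate next tokens, and generation is that step in a loop.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The capital of France is"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits[0, -1]  # scores for every possible next token
        next_id = logits.argmax()          # greedy: take the single likeliest one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Greedy argmax is the simplest decoding choice; deployed chat systems sample with temperature and other tricks, but the next-token core is the same.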

Folks are getting better at training specific goals into their models, so the math that failed yesterday may work tomorrow and fail again the day after. These problems will be solved in time and we'll have a broader range of surprising output moments.

I dunno, just feels like a waste of an article for anyone in the know and confusing for those not paying attention. "ChatGPT doesn't have a soul!" Ya, duh . . .

[–] rarely@sh.itjust.works 1 points 1 year ago

LLMs can't reason.