Why would they "prove" something that's completely obvious?
The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.
They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.
That's called science
No shit. This isn't new.
stochastic parrots. all of them. just upgraded “soundex” models.
this should be no surprise, of course!
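The "soundex" jab above refers to the classic Soundex phonetic hashing algorithm, which reduces a word to a letter plus three digits based purely on surface letter patterns. A minimal sketch of American Soundex (my own illustration, not from the thread):

```python
def soundex(word):
    # Digit codes for consonants; vowels, y, h, and w get no code
    codes = {**dict.fromkeys("bfpv", "1"),
             **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"),
             "l": "4",
             **dict.fromkeys("mn", "5"),
             "r": "6"}
    word = word.lower()
    first = word[0].upper()
    encoded = []
    prev = codes.get(word[0], "")  # the first letter's code also suppresses duplicates
    for ch in word[1:]:
        if ch in "hw":
            continue  # h and w do not separate letters with the same code
        code = codes.get(ch, "")
        if code and code != prev:
            encoded.append(code)
        prev = code  # vowels reset prev, so repeats across vowels are coded twice
    return (first + "".join(encoded) + "000")[:4]

print(soundex("Robert"))   # R163
print(soundex("Rupert"))   # R163 -- same sound, same code
```

The point of the comparison: Soundex maps surface form to output with no notion of meaning, which is the commenter's characterization of LLMs.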
This has been known for years, this is the default assumption of how these models work.
You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon.... not the other way around.
Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
Thank you Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.
Yah of course they do they’re computers
That's not really a valid argument for why, but yes, models that assemble statistical patterns from their training data are all bullshitting. TBH idk how people can convince themselves otherwise.
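To make "using training data to assemble a statistical model" concrete, here is a toy bigram generator (my own sketch, the hypothetical `training` text included): it produces plausible-looking word sequences purely by sampling which word followed which in the training text, with no understanding involved.

```python
import random
from collections import defaultdict

# Toy "training data" for illustration only
training = "the cat sat on the mat and the cat ran".split()

# Build the statistical model: for each word, record every observed successor
follows = defaultdict(list)
for a, b in zip(training, training[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    # Emit up to n more words by repeatedly sampling an observed successor
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("the", 5))
```

Every adjacent word pair in the output was seen in training; the model has no idea what a cat or a mat is. LLMs are vastly larger and subtler, but this is the flavor of the statistical-model claim being made here.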
TBH idk how people can convince themselves otherwise.
They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who is going to be “left behind”.
I think because it's language.
There's a famous anecdote about Charles Babbage: when he presented his difference engine (a gear-based calculator), someone asked, "If you put in the wrong figures, will the correct ones come out?" and Babbage couldn't understand how anyone could so thoroughly misunderstand that the machine is just a machine.
People are people; the main things that have changed since the cuneiform copper customer complaint are our materials science and networking ability. Most things that people interact with every day, most people just assume work the way they appear to on the surface.
And historically, nothing other than a person could do math problems or talk back to you. So people assume that means intelligence.
Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:
🫢