if you’re benefiting from some particular way of drawing a boundary around and thinking about AI, I’d really like to hear about it.
A bit of a different take from theirs, but since they asked:
I've noticed a lot of people use "AI" when they really mean "LLM and/or diffusion model". I can't count the number of times someone at my job has said "AI" while solely describing LLMs; at this point I've given up on correcting it.
This isn't just because "LLM" is a mouthful to say; it's also because it's convenient for tech companies if people don't look at the algorithm behind the curtain (flawed, as all algorithms are) and instead see it as magic.
It's blindingly obvious to anyone who's looked that LLMs and generative image models cannot reason or exhibit actual creativity (cf. the post about poetry here). Throw enough training data and compute at one and it may multiply a bit better (holy smokes, stop the presses, a neural network can multiply numbers???), or produce obviously bad output x% less of the time, but at this point we've more or less reached the bounds of what the technology can do. The industry's answer is stuff like RAG or manual blacklists, which just serve to hide its actual capabilities behind a curtain.
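For anyone who hasn't peeked behind that particular curtain: RAG is far less exotic than the marketing implies. Here's a minimal sketch of the idea; the function names and the word-overlap scoring are mine, purely for illustration (real systems use embedding models and vector databases, but the shape is the same): fetch text that looks relevant, staple it to the prompt, and let the model paraphrase it back.

```python
# A toy sketch of retrieval-augmented generation (RAG).
# Everything here is hypothetical and simplified: real systems
# use embedding models and vector databases, but the shape is
# the same: fetch text that looks relevant, staple it to the prompt.

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Paste the retrieved passages into the prompt verbatim."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Our refund policy allows returns within 30 days.",
    "The warranty covers manufacturing defects for one year.",
    "Shipping takes 3-5 business days within the country.",
]
print(build_prompt("how long do refunds take", docs))
```

The retrieval step does the heavy lifting; the model itself gains no new capability from any of this.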
Everyone wants AI money, but classic chatbots don't make money unless they're booking vacations for customers, writing up doctor's notes, or selling you cars.
But LLMs can't actually do those things reliably, so any tool in the space has to stay just opaque enough to give customers plausible deniability and to keep the bubble going before they figure it out.
Look at my widget! It's an ✨AI✨! A magical mystery box that makes healthcare, housing, hiring, organ donation, and grading decisions with maybe no bias at all... who can say? Look, buster, if you hire a human they'll definitely be biased!
If you say "statistical language model" instead of "AI" in that pitch, people start asking uncomfortable questions about how appropriate it is to expect a mad-libs algorithm trained on 4chan not to be racist.
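And "mad-libs algorithm" isn't just a put-down. Here's a toy bigram Markov chain, the simplest possible statistical language model; LLMs are enormously larger and more sophisticated, but the training principle, predicting the next word from past text, is the same. It can only remix sequences it has already seen, so the character of the training data is the character of the output.

```python
import random
from collections import defaultdict

# A toy bigram language model: the simplest "statistical
# language model" there is. It has no ideas of its own; it can
# only emit word pairs it has already seen in the training text.

def train(text: str) -> dict[str, list[str]]:
    """Record which words follow which in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers: dict[str, list[str]], start: str, length: int = 10) -> str:
    """Walk the chain, sampling each next word from what was seen."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Train it on garbage and it produces garbage, statistically
# guaranteed; there is no layer where judgment could intervene.
model = train("the cat sat on the mat and the dog sat on the cat")
print(generate(model, "the"))
```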
… an insurance pricing formula, for example, might be considered AI if it was developed by having the computer analyze past claims data, but not if it was a direct result of an expert’s knowledge, even if the actual rule was identical in both cases. [page 13]
This is an interesting quote indeed, as expert systems used to be at the forefront of AI; now they're apparently not considered AI at all.
Eventually LLMs will just be considered LLMs, and image generators will just be considered image generators, and people will stop ascribing ✨magic✨ to them; they will join the ranks of expert systems, tree search algorithms, logic programming, and everything else we take for granted as just another tool in the toolbox. The bubble people will then have to come up with some shinier, newer system to attract money.