Yeah, I’m a Windows user and agree with you completely. People choose an operating system, not battery life.
I would love it if they solved the problems that made Windows on ARM not ready for prime time, even though I’m enough of a power user that it will probably never be for me. But this is not the way.
Part of this is still trying to make a combination full-featured Windows laptop that’s also a Chromebook equivalent that’s also a tablet that’s also a dessert topping, when those should be separate devices with different ecosystems. UWP/Metro apps were tablet-first when they launched and sucked on desktop. The tablet push in Windows 10 initially broke accessibility. 2-in-1 Surfaces are way too heavy to be good tablets, because they’re still full-featured PCs.
I do not want to mix this duck sauce with that chocolate bunny.
The accessibility community is pretty divided on AI hype in general, and this feature is no exception. Making it easier to add alt text is good. But even if the image recognition tech were good enough (and it’s not, yet), good alt text is context dependent and must be human created.
Even if it’s just OCR, folks are ambivalent. Many assistive technologies have native OCR they’ll run automatically, and it’s usually better. But not all of them do, and many AT users don’t know how to access the text recognition when they have it.
Personally, I’d rather improve the ML functionality and UX on the assistive tech side, while improving the “create accessible content” user experience on the authoring tool side. (I.e., put the ML tech in the braille display and screen reader so they can describe the image, but also make it much easier for humans to craft good alt text, video captions, etc.)