I think it became inevitable that traditional 'sites' were going to be in trouble once AI bots gained ground. A conversational interface is much more organic and user-friendly than clicking through pages.
It's why big corps were so quick to start building walls/moats around the technology. If end users had control over which sites their AI bots pulled information from, that'd be a win for the consumer/end-user, and potentially for legitimate news sites, depending on how the payment structure is sorted out. E.g. get a personalized bot that references news articles from a curated list of trusted, decent journalist sites across a broad political spectrum, and you'd likely have a really great "AI assistant" to keep you up to date on current events. This sort of thing would also represent an existential threat to things like Google's core advertising business, as end users could replace many of their 'searches' with a curated, personalized AI assistant drawing on just reputable sources.
Big tech wants to control that, so they can advertise via those bots, prioritize their own agenda, and push paid content. So they want to control the AI sources and restrict end users' ability to filter out garbage. If users end up primarily interacting with an AI avatar, and you control the products/information that avatar presents, you have a huge amount of control over those individuals and their spending habits. Not much of a surprise.
It'd be cool to see a user-friendly local LLM that let users point it at reference sites of their choosing. Pair that with a news-site data standard that streamlines pulling pertinent data, and let news agencies charge a small fee for access to those APIs to fund it a bit. Shifting towards LLM-based data delivery, they could even save a bit on print/online publications -- no need for a fancy, expensive user-facing web app if readers are all just talking to their LLM-based AI assistant anyway.
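A minimal sketch of what that news-site data standard might look like: a hypothetical JSON article payload that a local assistant could fetch from a publisher's API and feed to its model. The field names (`source`, `headline`, `published`, `body`, `tags`) and the `parse_article` helper are illustrative assumptions, not any existing spec.

```python
import json
from dataclasses import dataclass, field

# Hypothetical article schema -- field names are illustrative
# assumptions, not taken from any existing standard.
@dataclass
class Article:
    source: str        # publisher identifier
    headline: str
    published: str     # ISO 8601 date string
    body: str
    tags: list = field(default_factory=list)

def parse_article(payload: str) -> Article:
    """Parse one article payload from a (hypothetical) news API."""
    data = json.loads(payload)
    return Article(
        source=data["source"],
        headline=data["headline"],
        published=data["published"],
        body=data["body"],
        tags=data.get("tags", []),
    )

# Example payload a publisher's fee-gated API might return.
sample = json.dumps({
    "source": "example-news",
    "headline": "Local LLMs gain ground",
    "published": "2024-05-01",
    "body": "Full article text here...",
    "tags": ["ai", "media"],
})

article = parse_article(sample)
print(article.headline)
```

With a shared payload shape like this, the assistant only needs one parser for every subscribed outlet, and the publisher never has to maintain a consumer-facing front end for it.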