For automated bans I can tell you why: social media fingerprinting and anti-bot detection are really hard, and at scale the economics favor banning.
I work in anti-fraud software development, and browsers and apps give you an incredibly rich fingerprint that can identify almost anyone. However, it produces a lot of false positives, simply because the datasets are huge and people use social media in weird but legitimate ways (shared cafe wifi, etc.). Big, high-stakes systems like banks can mitigate that with human support and KYC (know your customer) processes, but social networks don't want to / can't really do that in a scalable way.
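To give a feel for it, here's a minimal client-side sketch of what passive fingerprinting looks like. This is purely illustrative: the signal list and the FNV-1a hash are my own assumptions, not any particular vendor's stack, and real systems combine hundreds of signals plus server-side data.

    // Minimal fingerprint sketch (illustrative assumptions, not a real product).
    // Hypothetical signal list: a few cheap, passively readable browser properties.
    function collectSignals(): string[] {
      return [
        navigator.userAgent,
        navigator.language,
        String(navigator.hardwareConcurrency),
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        String(new Date().getTimezoneOffset()),
      ];
    }

    // FNV-1a hash: just shows how the signals collapse into one identifier.
    function fnv1a(input: string): string {
      let hash = 0x811c9dc5;
      for (let i = 0; i < input.length; i++) {
        hash ^= input.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193) >>> 0;
      }
      return hash.toString(16);
    }

    const fingerprint = fnv1a(collectSignals().join("|"));
    console.log(fingerprint);

Note that two unrelated people on the same cafe wifi with the same phone model can easily collide on a fingerprint like this, which is exactly where the false positives come from.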
Then there's the misalignment of incentives. The anti-bot team doesn't care much about false positives, and these are almost never reported to upper management, so false bans become common and, in the big picture, not that big a deal. That's why you see so many websites running Cloudflare's anti-bot protection even though, whenever someone actually evaluates it with an A/B test, they're clearly losing sales to it; the site admins just never find out, because Cloudflare only tells them about the successes, not the failures.
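A back-of-envelope sketch of why those losses stay invisible (all numbers made up for illustration): visitors who bounce on a challenge never load the site's analytics, so the loss never shows up anywhere the admins look.

    // Hypothetical numbers, purely to show the shape of the blind spot.
    const dailyVisitors = 100_000;
    const challengeRate = 0.05;   // legit visitors who get challenged
    const challengeDropOff = 0.5; // challenged visitors who give up
    const conversionRate = 0.02;  // normal conversion rate
    const orderValue = 60;        // average order value, dollars

    const lostVisitors = dailyVisitors * challengeRate * challengeDropOff;
    const lostRevenue = lostVisitors * conversionRate * orderValue;

    // ~2,500 visitors and ~$3,000/day that never reach the analytics,
    // because dropped visitors never load the tracking script.
    console.log({ lostVisitors, lostRevenue });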
Finally, real bot developers are really fucking good. With proper funding you just hire real people with real web browsers / real phones, which is why large-scale bot operations like those run by governments are almost impossible to defend against. That's why Russia, China, India, etc. are investing so heavily in internet trolls: it's super effective, and it's a hard problem to fight because it's easy to hide behind accusations of "xenophobia" and just deny everything. Any serious forensics would also implicate the social media hosts themselves, so the incentives all align for a perfect propaganda machine.