I went all out and got the 192GB model, and I've been using it to run local machine learning models successfully. Llama 2 70B runs fairly well after quantizing to 16-bit instead of the original 32-bit, which ate all 192GB plus 40GB of swap before running out of system memory. Smaller models like Llama 2 7B are wicked fast.
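A quick back-of-envelope sketch of why the 32-bit weights blow past unified memory while 16-bit fits (this only counts the raw weights, not activations or KV cache):

```python
# Rough memory footprint of Llama 2 70B weights at different precisions.
PARAMS = 70e9  # ~70 billion parameters

for bits, label in [(32, "fp32"), (16, "fp16")]:
    gb = PARAMS * bits / 8 / 1e9  # bytes per param = bits / 8
    print(f"{label}: ~{gb:.0f} GB just for weights")

# fp32: ~280 GB -> exceeds 192 GB RAM + 40 GB swap
# fp16: ~140 GB -> fits, with headroom for activations and the KV cache
```

That ~140GB for fp16 is also why almost no retail GPU setup can hold the model without further quantization, while the 192GB of unified memory can.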
Performance for normal development is simply divine: I can have basically every project I ever work on open across my dual 4K monitors without any slowdown, while simultaneously compiling and running models in the background without a stutter.
My biggest complaint so far is that my Thunderbolt 4 dock doesn't support the 144Hz my monitors can crank out.
I have had one system crash so far, not sure of the cause, but overall stability has been impeccable.
I'm used to x86 machines, and one flaw with the Apple Silicon switch in general is that some of my React Native libraries were compiled in a way that makes them difficult to build without Rosetta. That's obviously not Apple's problem, nor is it specifically a Studio issue.
The $9k price tag was incredibly painful, but I'm happy to have a machine that outperforms most retail machines on the market for VRAM and machine learning without spending even more.