this post was submitted on 12 Feb 2025
56 points (96.7% liked)
LocalLLaMA
I misread this part, thinking you implied a bus width increase is necessary.
For a 512-bit memory bus, AMD would either have to use 1+8 dies if they follow the 7900 XTX scheme, or build a monolithic behemoth like GB202. The former would have increased power draw but lower manufacturing costs, while the latter is more power efficient but more prone to defects, since it's getting close to the reticle size limit.
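As a back-of-the-envelope check (assuming each memory chiplet carries a 64-bit memory controller, as the MCDs do on Navi 31), the die count follows directly from the bus width:

```python
# Rough sketch: memory-chiplet count for a given bus width,
# assuming 64 bits of memory controller per MCD (as on Navi 31 / 7900 XTX).
BITS_PER_MCD = 64

def mcd_count(bus_width_bits: int) -> int:
    # Bus width must split evenly across the memory chiplets
    assert bus_width_bits % BITS_PER_MCD == 0
    return bus_width_bits // BITS_PER_MCD

# 7900 XTX: 384-bit bus -> 6 MCDs, i.e. 1 GCD + 6 memory dies
print(mcd_count(384))  # 6
# Hypothetical 512-bit part -> 8 MCDs, i.e. the 1+8 layout above
print(mcd_count(512))  # 8
```

This is just illustrative arithmetic, not a statement about any announced product.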
I'd guess Nvidia will soon have to switch to chiplet-based GPUs. Maybe AMD stopped (for now?) because their whole product stack wasn't using chiplet-based designs, so they had far less flexibility with allocation and binning than with Ryzen chiplets.
Has monolithic vs. chiplet been confirmed for the 9070? A narrow bus width on a much smaller process node (compared to the previous I/O die) would free up a lot of die area for the stream processors.