this post was submitted on 12 Feb 2025
56 points (96.7% liked)
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
32GB of VRAM at a consumer price would certainly help. I'm a bit concerned about the memory bandwidth, though; it seems much lower than what modern Nvidia cards offer. But if it's priced competitively, this could be a good option for a lot of AI tasks at home, especially LLM inference.
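For rough intuition on why bandwidth matters here (my own back-of-the-envelope math, not anything from the announcement): single-stream LLM decoding is usually memory-bandwidth bound, since every generated token has to stream all the weights from VRAM. So bandwidth divided by model size gives a ceiling on tokens per second. All the numbers below are illustrative assumptions, not actual card specs:

```python
# Back-of-the-envelope estimate: batch-size-1 LLM decoding reads every
# weight once per generated token, so memory bandwidth sets a hard ceiling.
# All figures below are assumptions for illustration, not real specs.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: weights streamed once per token."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical 32GB card at ~500 GB/s vs. a high-end Nvidia card at
# ~1000 GB/s, both running a model quantized down to ~18 GB of weights.
for name, bw in [("hypothetical 32GB card", 500.0), ("high-end Nvidia", 1000.0)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, 18.0):.0f} tokens/s ceiling")
```

By that math, half the bandwidth means roughly half the decode speed, regardless of how much VRAM you have; the extra capacity mainly buys you bigger models or longer contexts, not faster generation.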