That's why NPUs will have high-bandwidth memory on-chip. They also run at low precision to save power, but are massively parallel. A GPU or CPU can do the same work, just less optimized for it.
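(Not from the comment above, but as a rough illustration of what "low precision" means in practice, here's a minimal sketch of symmetric int8 quantization in Python; the scheme and numbers are illustrative, not any specific NPU's implementation.)

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(x).max() / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
print("max error:", np.abs(weights - approx).max())          # small relative to the weight range
print("bytes: fp32 =", weights.nbytes, "int8 =", q.nbytes)   # 4x less memory to move, hence less power
```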
That was my question... How much on-chip memory do they have, and what applications fit in that amount? I think an image generator needs around 4-5 GB, and an LLM smart enough to work as a general-purpose chatbot needs around 8-10 GB; more is better. At that point, wouldn't you be better off with unified memory, like the M-series Macs or other APUs? Or maybe this isn't targeted at generative AI but at other applications. Hence my question.
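(For a rough sense of where those GB figures come from: weight memory is basically parameter count times bytes per parameter. A hypothetical back-of-the-envelope calculator, with the 7B model size assumed for illustration rather than taken from the thread:)

```python
def model_footprint_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: params * (bits / 8).

    Ignores activations and KV cache, which add more on top.
    """
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# Assumed example: a ~7B-parameter LLM at common precision levels.
for bits in (16, 8, 4):
    print(f"7B model @ {bits}-bit: {model_footprint_gb(7, bits):.1f} GB")
# 16-bit ≈ 14 GB, 8-bit ≈ 7 GB, 4-bit ≈ 3.5 GB -- quantization level is
# what decides whether a chatbot fits in the 8-10 GB range mentioned above.
```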
Last I heard, this is for on-device speech recognition and basic image recognition/OCR, so these things can more intelligently listen to, see, and store what you're doing without sending it to a server. Not creepy at all.