this post was submitted on 15 Feb 2025
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
That laptop should be a bit faster than mine. It's a few generations newer, has DDR5 RAM and maybe even proper dual-channel memory. As far as I know, LLM inference is almost always memory bound, meaning the bottleneck is your RAM speed (and how wide the bus between CPU and memory is). So whether you use SYCL, Vulkan or plain CPU cores shouldn't have a dramatic effect. The main thing limiting speed is that the computer has to transfer gigabytes' worth of numbers from memory to the processor for every generated token, so the iGPU or CPU spends most of its time waiting on memory transfers.

I haven't kept up with development, so I might be wrong here, but I don't think more than single-digit tokens/sec is possible on such a machine. You'd need a workstation or server with multiple memory channels, something like a MacBook with Apple silicon and its unified memory, or a GPU with fast VRAM. Still, you might be able to do a bit more than 3 t/s.
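To make the memory-bound point concrete, here's a rough back-of-the-envelope sketch. Every generated token has to stream the full set of weights through the memory bus once, so bandwidth divided by model size gives a hard ceiling on tokens per second. The bandwidth and model-size figures below are assumptions for illustration, not measurements:

```python
# Decode speed ceiling for memory-bound LLM inference:
# each generated token streams all model weights from RAM once,
# so tokens/s can't exceed (memory bandwidth) / (bytes read per token).
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Assumed figures (illustrative, not measured):
# dual-channel DDR5-4800 peaks around 76.8 GB/s,
# a 7B model at ~4-bit quantization is roughly 4 GB of weights.
print(max_tokens_per_sec(76.8, 4.0))   # ~19 t/s theoretical ceiling
print(max_tokens_per_sec(76.8, 14.0))  # ~5.5 t/s for the same model at fp16
```

In practice you only get a fraction of peak bandwidth, which is how you end up in single digits.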
Maybe keep trying the different computation backends. Have a look at your laptop's power settings as well. Mine is a bit slow on the default "balanced" power profile; it speeds up once I set it to "performance" or gaming mode. And if you can't get llama.cpp compiled, try Ollama or Koboldcpp instead. They use the same framework under the hood and might be easier to install. SYCL, though, might prove a bit of a letdown. It's nice, but few people seem to use it, so it may not be very polished or optimized.
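If compiling keeps failing, another low-effort route is the llama-cpp-python bindings, which wrap the same llama.cpp backends and ship pre-built wheels. A minimal sketch, with the model path as a placeholder (which backends are actually available depends on how the wheel was built):

```python
from llama_cpp import Llama

# Placeholder path -- point at whatever GGUF model you downloaded.
llm = Llama(
    model_path="./models/model-q4_k_m.gguf",
    n_gpu_layers=-1,   # offload all layers if the backend supports it; 0 = CPU only
    n_ctx=2048,        # context window
    n_threads=8,       # roughly match your physical core count
)

out = llm("Q: Why is local LLM inference memory bound? A:", max_tokens=64)
print(out["choices"][0]["text"])
```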
I'll vouch for Koboldcpp. I currently use the CUDA version, and it has a lot of what you'd need to dial in settings that work for you. Just remember to save whatever works best as a .kcpps file, or you'll be entering it manually every time you boot it up (though saving doesn't work on Linux AFAIK, and it's a pain that it doesn't).
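As a possible workaround for the Linux saving issue: a .kcpps file is just JSON, so you can write one yourself and load it at startup. The key names below are illustrative guesses rather than a documented schema, so crib the real ones from a .kcpps saved on another machine:

```python
import json

# Hypothetical settings -- key names here are illustrative, not the
# official .kcpps schema; copy real keys from an existing .kcpps file.
settings = {
    "model_param": "./models/model-q4_k_m.gguf",
    "contextsize": 4096,
    "gpulayers": 33,
    "threads": 8,
}

with open("my-settings.kcpps", "w") as f:
    json.dump(settings, f, indent=2)
```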