LocalLLaMA

I didn't expect that an 8B-F16 model taking 16GB on disk could run on my laptop, which has only 16GB of RAM and an integrated GPU. It was painfully slow, around 0.3 t/s, but it ran. Then I learned that you can effectively run a model from storage without loading it into memory, and I confirmed that this was exactly what was happening: memory usage stayed constant at around 20% whether or not the model was running. The problem is that gpt4all-chat runs every model larger than 1.5B this way, and the difference is huge: the 1.5B model runs at 20 t/s. Even a distilled 6.7B_Q8 model of roughly 7GB on disk, which has plenty of room (12GB of RAM free), didn't change the memory usage and was also very slow (3 tokens/sec). I'm pretty new to this field, so I'm probably missing something basic, but I just followed the instructions for downloading and compiling it.

top 9 comments
[–] ALERT@sh.itjust.works 2 points 5 days ago

try LM Studio

[–] possiblylinux127@lemmy.zip 1 points 5 days ago* (last edited 5 days ago) (1 children)

Did you try your CPU?

Also try Deepseek 14b. It will be much faster.

[–] corvus@lemmy.ml 1 points 4 days ago* (last edited 4 days ago) (1 children)

Yes, gpt4all runs it in CPU mode; the GPU option doesn't appear in the drop-down menu, which means the GPU isn't supported or there is an error. I'm trying to run the models with the SYCL backend implemented in llama.cpp, which applies specific CPU+GPU optimizations using the Intel DPC++/C++ Compiler and the oneAPI toolkit.

Also try Deepseek 14b. It will be much faster.

ok, I'll test it out.

[–] possiblylinux127@lemmy.zip 1 points 4 days ago (1 children)

Why don't you just use ollama?

[–] corvus@lemmy.ml 1 points 3 days ago

I don't like intermediaries ;) Fortunately I compiled llama.cpp with the Vulkan backend and everything went smoothly, so now I have the option to offload to the GPU. Next I'll compare CPU vs CPU+GPU performance. I also downloaded DeepSeek 14B and it's really good, the best I've been able to run so far on my limited hardware.
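
For the CPU vs CPU+GPU comparison, here's a rough sketch using the llama-cpp-python bindings rather than the CLI. This assumes the bindings were installed with the Vulkan backend enabled (e.g. something like CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python); the model file name is just a placeholder.

```python
# Rough sketch (not tested on this exact setup): time CPU-only vs. CPU+iGPU
# generation with the llama-cpp-python bindings. The model path is a placeholder.
import time
from llama_cpp import Llama

def tokens_per_second(n_gpu_layers: int) -> float:
    llm = Llama(
        model_path="deepseek-r1-distill-qwen-14b-q4_k_m.gguf",  # placeholder file name
        n_gpu_layers=n_gpu_layers,  # 0 = pure CPU, -1 = offload every layer to the GPU
        n_ctx=2048,
        verbose=False,
    )
    start = time.time()
    out = llm("Write a haiku about laptops.", max_tokens=128)
    generated = out["usage"]["completion_tokens"]
    return generated / (time.time() - start)

print(f"CPU only   : {tokens_per_second(0):.1f} t/s")
print(f"CPU + iGPU : {tokens_per_second(-1):.1f} t/s")
```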

[–] hendrik@palaver.p3x.de 2 points 5 days ago* (last edited 5 days ago) (1 children)

I'm not sure what kind of laptop you own. Mine does about 2-3 tokens/sec when running an 8B-parameter model, so your last try seems about right. Concerning the memory: llama.cpp can load models "memory mapped". That means the system decides which parts to load into memory. The whole model might be in there, but it doesn't count as active memory usage; I believe it counts towards the "cached" value in the statistics. If you want to make sure, you have to force it not to memory-map the model. In llama.cpp that's the --no-mmap parameter; I have no idea how to do it in gpt4all-chat. But I'd say the model is already loaded in your case, it just doesn't show up as used memory because of the mmap behaviour.
Maybe try a few other programs as well, like ollama, koboldcpp or llama.cpp, and see how they do. And I wouldn't run full-precision models on an iGPU. Keep to quantized models: Q8 or Q5... or Q4...
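
If gpt4all-chat doesn't expose that switch and you want to check the mmap explanation yourself, a minimal sketch with the llama-cpp-python bindings (assuming you have them installed; the file name is a placeholder) would look like this:

```python
# Minimal sketch: use_mmap=False is the bindings' equivalent of llama.cpp's
# --no-mmap, so the whole model is read into RAM up front and shows up as
# "used" memory instead of "cached".
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-7b-instruct-q8_0.gguf",  # placeholder file name
    use_mmap=False,  # force a full load instead of memory-mapping from disk
    n_ctx=2048,
    verbose=False,
)

out = llm("Explain memory mapping in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```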

[–] corvus@lemmy.ml 2 points 5 days ago* (last edited 5 days ago) (1 children)

I tried llama.cpp but I was getting errors about a missing library, so I tried gpt4all and it worked. I'll try to recompile and test it again. I have a ThinkBook with an Intel i5-1335U and integrated Xe graphics. I installed the Intel oneAPI toolkit so llama.cpp could take advantage of the SYCL backend for Intel GPUs, but I hit an execution error that I couldn't solve after many days. I also installed the Vulkan SDK needed to compile gpt4all in the hope of being able to use the GPU, but gpt4all-chat doesn't show the option to run on it, which from what I read means it's not supported. From some posts I've read, though, I shouldn't expect a big performance boost from that GPU anyway.

[–] hendrik@palaver.p3x.de 3 points 5 days ago* (last edited 5 days ago) (1 children)

That laptop should be a bit faster than mine: it's a few generations newer, has DDR5 RAM and maybe even proper dual channel. As far as I know, LLM inference is almost always memory bound, which means the bottleneck is your RAM speed (and how wide the bus between CPU and memory is). So whether you use SYCL, Vulkan or the plain CPU cores shouldn't have a dramatic effect. The main thing limiting speed is that the computer has to transfer gigabytes worth of numbers from memory to the processor on each step, so the iGPU or CPU spends most of its time waiting for memory transfers. I haven't kept up with development, so I might be wrong here, but I don't think more than single-digit tokens/sec is possible on such a computer. It'd have to be a workstation or server with multiple separate memory banks, or something like a MacBook with Apple silicon and its unified memory, or a GPU with fast VRAM. Though you might be able to do a bit more than 3 t/s.
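
To put rough numbers on the memory-bandwidth argument, here's a back-of-the-envelope sketch; both figures below are assumptions for a typical DDR5 laptop, not measurements.

```python
# Back-of-the-envelope estimate of the bandwidth ceiling. Each generated token
# needs roughly one full pass over all the weights, so tokens/sec is capped at
# (memory bandwidth) / (model size). Numbers are assumed, not measured.
model_size_gb = 16.0       # 8B parameters at F16
ram_bandwidth_gb_s = 40.0  # ballpark for dual-channel DDR5 in a laptop

print(f"~{ram_bandwidth_gb_s / model_size_gb:.1f} tokens/sec upper bound at F16")
print(f"~{ram_bandwidth_gb_s / (model_size_gb / 2):.1f} tokens/sec upper bound at Q8")
```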

Maybe keep trying the different computation backends. Have a look at your laptop's power settings as well: mine is a bit slow on the default "balanced" power profile and speeds up once I set it to "performance" or gaming mode. And if you can't get llama.cpp compiled, maybe just try Ollama or koboldcpp instead; they use the same framework and might be easier to install. And SYCL might prove to be a bit of a letdown. It's nice, but it seems few people are using it, so it might not be very polished or optimized.

[–] Eyedust@sh.itjust.works 2 points 5 days ago

I'll vouch for koboldcpp. I use the CUDA version currently, and it has a lot of what you'd need to find the settings that work for you. Just remember to save what works best as a .kcpps file, or else you'll be entering the settings manually every time you boot it up (though saving doesn't work on Linux AFAIK, and it's a pain that it doesn't).