this post was submitted on 15 Feb 2025
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 2 years ago
Did you try your CPU?
Also try Deepseek 14b. It will be much faster.
Yes, gpt4all runs it in CPU mode; the GPU option does not appear in the drop-down menu, which means the GPU is not supported or there is an error. I'm trying to run the models with the SYCL backend implemented in llama.cpp, which applies CPU+GPU-specific optimizations using the Intel DPC++/C++ Compiler and the oneAPI Toolkit.
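For anyone trying the same thing, a SYCL build of llama.cpp on Intel hardware looks roughly like this. This is a sketch based on the llama.cpp SYCL docs; it assumes the oneAPI Base Toolkit is installed under the default `/opt/intel/oneapi` path, and exact flag and binary names can vary between llama.cpp versions:

```shell
# Load the oneAPI environment (DPC++ compilers, oneMKL, SYCL runtime)
source /opt/intel/oneapi/setvars.sh

# Configure llama.cpp with the SYCL backend, using the Intel icx/icpx compilers
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j

# List the SYCL devices llama.cpp can see (the Intel GPU should show up here),
# then run with -ngl to offload layers to it (model path is illustrative)
./build/bin/llama-ls-sycl-device
./build/bin/llama-cli -m model.gguf -ngl 33 -p "Hello"
```

If the GPU doesn't appear in the device listing, the problem is at the driver/runtime level rather than in llama.cpp itself.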
ok, I'll test it out.
Why don't you just use ollama?
I don't like intermediaries ;) Fortunately I compiled llama.cpp with the Vulkan backend, everything went smoothly, and now I have the option to offload to the GPU. Next I'll compare CPU vs CPU+GPU performance. I downloaded Deepseek 14b and it's really good, the best I could run so far on my limited hardware.
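The Vulkan route is simpler since it needs no vendor compiler, only the Vulkan SDK/drivers. A minimal build sketch (flag names follow current llama.cpp docs and may differ by version; the model filename is a placeholder):

```shell
# Configure llama.cpp with the Vulkan backend (no Intel toolchain required)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# -ngl controls how many transformer layers are offloaded to the GPU;
# 0 keeps everything on the CPU, a large value offloads as many as fit in VRAM
./build/bin/llama-cli -m deepseek-14b.gguf -ngl 99 -p "Hello"
```

Running the same prompt with `-ngl 0` and then with offload enabled gives a quick CPU vs CPU+GPU comparison from the tokens/s numbers llama.cpp prints at the end.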