
LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level I'll either make a new one once in a while, or I'll just leave this one up forever as a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps; that should help people provide more useful, concise answers!

[–] hendrik@palaver.p3x.de 2 points 22 hours ago

I suppose that line means llama.cpp tried to allocate another chunk of memory, roughly 2GB, and that failed because there wasn't any memory left. I'm not sure about the details; maybe it's the KV cache and the other buffers required for the computation aside from the model itself? Have you tried lowering the number of layers offloaded to the iGPU to see if that works? Lowering the value to something like -ngl 20 might leave additional space for other important things.
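
For example, something like this (just a sketch; the binary name and model path are placeholders, and newer llama.cpp builds call the binary llama-cli while older ones use main):

    # offload only 20 layers to the iGPU, keep the rest on the CPU
    ./llama-cli -m ./models/your-model.gguf -ngl 20 -p "your prompt"

Every layer you keep on the CPU frees up a bit of iGPU memory for the KV cache and compute buffers.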

[–] corvus@lemmy.ml 2 points 11 hours ago

Yeah, I tested with lower numbers and it works. I just wanted to offload the whole model, thinking it would work; 2GB is a lot. With other models it prints about 250MB when it fails, and even adding that to the model size it's still well below the iGPU's free memory, so I don't get it...

Anyway, I was thinking about upgrading the memory to 32GB or maybe 64GB, but I hesitate: with models around 7GB and CPU only I get around 5 t/s, and with 14GB models 2-3 t/s, so if I run one of around 30GB I guess it will get around 1 t/s? My supposition is that increasing RAM doesn't increase performance per se, it just lets you load bigger models into memory, and speed scales roughly inversely with model size... what do you think?

[–] hendrik@palaver.p3x.de 2 points 11 hours ago

From what I know, I'd assume yes: the relation between model size and speed should be roughly linear, since generation is mostly limited by how fast the weights can be read from memory. Maybe there is some small additional overhead making it a bit faster or slower than expected, but I'm really not an expert on the maths, so don't take my word for it.
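
As a quick back-of-the-envelope check of your own numbers, if speed really does scale inversely with model size:

    5 t/s × 7 GB / 30 GB ≈ 1.2 t/s

so your guess of around 1 t/s sounds about right.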

And maybe have a look at this bug report: https://github.com/ggml-org/llama.cpp/issues/11332
I think it matches your situation. They resolved it by adjusting the batch size, and someone there recommends not using Vulkan on an iGPU at all.
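
If you want to try the batch-size workaround, something like this might be worth a shot (just a sketch with guessed values; the relevant flags are -b/--batch-size and -ub/--ubatch-size):

    # smaller batch sizes shrink the compute buffers llama.cpp has to allocate
    ./llama-cli -m ./models/your-model.gguf -ngl 20 -b 256 -ub 64 -p "your prompt"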

[–] corvus@lemmy.ml 1 points 11 hours ago

Oh great, thanks