Mistral
I personally run models on my laptop. I have 48 GB of RAM and an i5-12500U. It runs a little slow, but it's usable.
My gear is an old i7-4790 with 16 GB of RAM.
How many tokens per second?
The biggest bottleneck is going to be memory. I would just stick with GPU only since your GPU memory has the most bandwidth.
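To put rough numbers on that: for a dense model, generating each token streams essentially the whole weight file through memory, so bandwidth sets a ceiling on decode speed. A ballpark sketch (the bandwidth figures below are assumptions, not measurements):

```python
# Back-of-the-envelope estimate, not a benchmark: each generated token reads
# roughly the full set of weights, so tokens/s is capped near bandwidth / size.

def rough_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper-bound estimate of decode speed from memory bandwidth alone."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical numbers: a ~4.9 GB q4_K_M 8B model on dual-channel DDR4
# (~40 GB/s) vs. a GTX 1070 Ti class card (~256 GB/s).
print(rough_tokens_per_second(4.9, 40))   # ~8 tok/s from system RAM
print(rough_tokens_per_second(4.9, 256))  # ~52 tok/s if fully on the GPU
```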
I have a 1070 Ti 6 GB, so I'm right there with you. It's important to note, though, that our use cases and expectations may differ. Also, I'm using kobold.cpp with cuBLAS partial offloading to run the models (rough launch/API sketch after the list):
Qwen 14B R1 distill q6km for testing out CoT, science/math-related questions, internet-search RAG, and the best token speed to performance ratio.
Arliai Mistral NeMo 12B finetune q4km for smut and creative writing.
Beepo 22B, an uncensored Mistral Small 2407 fine-tune that will tell me all the forbidden no-no knowledge.
Mistral Small 3 2501 as the best general-purpose model that still fits on the card with bearable token speed and context window.
MiniCPM for multimodal vision, mainly document scanning.
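Roughly how that partial-offload setup is usually launched and queried (flag names and the API shape are from memory, so treat this as a sketch and check `koboldcpp.py --help`; the model filename is just a placeholder):

```python
# Typical launch with cuBLAS partial offloading (run in a shell):
#
#   python koboldcpp.py --model qwen-14b-r1-distill.Q6_K.gguf \
#       --usecublas --gpulayers 20 --contextsize 4096
#
# --gpulayers puts as many transformer layers as fit into VRAM; the rest stay
# in system RAM. Once running, koboldcpp exposes a KoboldAI-style local API:

import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",  # default koboldcpp port
    json={
        "prompt": "Explain retrieval-augmented generation in one paragraph.",
        "max_length": 200,
        "temperature": 0.7,
    },
    timeout=300,
)
print(resp.json()["results"][0]["text"])
```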
Deepseek is good at reasoning and Qwen is good at programming, but I find Llama 3.1 8B to be well suited for creativity, writing, translations, and other tasks that fall outside the scope of your two models. It's a decent all-rounder, about 4.9 GB in q4_K_M.
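A quick sanity check on that ~4.9 GB figure (the bits-per-weight value for q4_K_M is an approximation; actual GGUF files vary slightly):

```python
# Approximate on-disk size of a quantized model: params * bits per weight / 8.

def quantized_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough quantized model size in gigabytes (decimal GB)."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# q4_K_M averages roughly 4.8-4.9 bits per weight for an 8B model.
print(round(quantized_size_gb(8.0, 4.85), 2))  # ~4.85 GB, close to the quoted 4.9 GB
```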
It's not out of my scope; I'm just learning what I can do locally with my current machine.
Today I read about RAG; maybe I'm gonna try an easy local setup to chat with a PDF.
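A minimal sketch of that kind of local "chat with a PDF" setup. It assumes `pypdf`, `sentence-transformers`, `numpy`, and `requests` are installed and that a koboldcpp-style API is running locally (same endpoint as the earlier sketch); the PDF filename and question are placeholders:

```python
import numpy as np
import requests
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

# 1. Extract and chunk the PDF text (fixed-size character chunks for simplicity).
reader = PdfReader("my_document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i:i + 800] for i in range(0, len(text), 800)]

# 2. Embed chunks and the question, then retrieve the most similar chunks.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)
question = "What is the main conclusion of this document?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]
top_ids = np.argsort(chunk_vecs @ q_vec)[-3:]  # top-3 by cosine similarity
context = "\n---\n".join(chunks[i] for i in top_ids)

# 3. Stuff the retrieved context into the prompt for the local model.
prompt = (
    f"Answer using only this context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={"prompt": prompt, "max_length": 300, "temperature": 0.2},
    timeout=300,
)
print(resp.json()["results"][0]["text"])
```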