Late to the party; I never got FOSAI working until I found LMStudio, but I have two questions:
Is there any way I could utilize my GPU, a Radeon RX 6800M (12 GB VRAM)? I got Mistral-7B doing 5 tokens/s, but it's all running on the CPU.
Is there any model specifically for programming questions? It could be of immense help with my projects, without my having to ask ChatGPT.
Have you tried the guide on AMD's site? It looks like it's Windows-only, and I don't know what you're running. Plus, I use Ollama, so beyond the sketch below I probably can't be of much help.
For programming, my favorite is Dolphin-Mixtral, but I've had good results with Dolphin-Mistral and Llama2.
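If you want to try GPU offloading outside LMStudio, llama.cpp has an AMD (hipBLAS/ROCm) build, and llama-cpp-python exposes it. This is only a minimal sketch under that assumption; the model path and layer count are placeholders, and I haven't tested it on an RX 6800M:

```python
# Minimal sketch: offload Mistral-7B layers to the GPU via llama-cpp-python.
# Assumes llama.cpp was built with hipBLAS/ROCm support for AMD cards;
# the model path below is a placeholder for whatever GGUF file you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=32,  # layers to push into VRAM; tune for your 12 GB
    n_ctx=4096,       # context window
)

out = llm("Q: Write a Python function that reverses a string.\nA:",
          max_tokens=128)
print(out["choices"][0]["text"])
```

The main knob is n_gpu_layers: each layer you offload moves work from the CPU into VRAM, so you raise it until you run out of the 12 GB.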
I've got a question about LMStudio: is it FOSS, or just partly open?
On their website I see they have a GitHub link, but I can't identify the "main" project.
Looks like LMStudio is FOSS, although I'm not 100% sure. What it does is let you run FOSAI models locally.
Yeah, that much I understand. I was just curious, since I'm currently using Ollama, which is fully FOSS, plus a web UI to work with the LLMs in chat. But having it all in one place would be really nice.
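For what it's worth, the web UI is optional: Ollama serves a local HTTP API, so anything that can POST JSON can talk to it. A minimal sketch, assuming the default port and that you've already pulled the mistral model:

```python
# Minimal sketch of querying a local Ollama server directly,
# assuming the default port (11434) and a model pulled beforehand
# with `ollama pull mistral`. Uses only the standard library.
import json
import urllib.request

payload = json.dumps({
    "model": "mistral",
    "prompt": "Explain what a mutex is in one sentence.",
    "stream": False,  # single JSON response instead of a token stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```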
I've heard some good things about LMStudio, but if it's not FOSS, it's not getting on my machine.