Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Hugging Face Open LLM Leaderboard. It is the first open-source model to achieve an average score above 80.

Toes@ani.social 7 points 9 months ago

4GB is practically nothing in this space. Ideally you want at least 10GB of dedicated VRAM, if not more. Keep in mind you're probably also sharing that VRAM with your operating system, so it's more like ~3GB before you've even started.

KoboldCpp is capable of using both your GPU and CPU together (via a feature called layer offloading), so you might wanna consider that. There's a trade-off between how much memory you have available, the quality of the output, and the speed of generation.
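KoboldCpp itself is launched from the command line, but the same layer-splitting idea is exposed by the llama-cpp-python bindings (both are front-ends to llama.cpp). A minimal sketch, assuming you have a local GGUF quantization of a model; the file path and layer count are placeholders to tune for your card:

```python
# Sketch of GPU/CPU layer splitting with llama-cpp-python
# (pip install llama-cpp-python). The model path is hypothetical:
# point it at whatever GGUF file you actually have.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=20,  # layers kept in VRAM; 0 = pure CPU, -1 = offload all
    n_ctx=2048,       # context window; longer contexts also cost memory
)

out = llm("Explain GPU layer offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The usual tuning approach is to raise the layer count until you run out of VRAM: more layers on the GPU means faster generation, fewer means a smaller VRAM footprint (and at 0 you're back to CPU-only, which also covers the RAM/swap case below).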

The model mentioned in this post can be run on the CPU with enough system RAM or swap; as a rough guide, a 72B model at 4-bit quantization needs on the order of 40GB for the weights alone.

If you wanna keep it all on the GPU, check out 4-bit quantized models. Also, there's been a lot of work on running these models on the Raspberry Pi, and I suspect that work could help you out here as well.
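If you want a picture of what 4-bit loading looks like in practice, here's a minimal sketch using the Hugging Face transformers and bitsandbytes libraries (one common route; the model name below is just an example, not anything from this thread):

```python
# Sketch of 4-bit quantized loading with transformers + bitsandbytes
# (pip install transformers accelerate bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # example; pick one sized for your card

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # weights stored in 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # do the math in fp16
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # spills layers to CPU RAM if VRAM runs out
)

inputs = tok("4-bit quantization lets you", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

At 4 bits a 7B model's weights come to roughly 3.5-4GB, which is why quantization is what makes small cards viable at all.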