this post was submitted on 14 Oct 2024
16 points (76.7% liked)
Asklemmy
you are viewing a single comment's thread
Honestly, all of the generative AI subscriptions are pretty fucking steep at this point compared to just running a model locally.
I agree with this. I'm using a 1070 Ti for image gen and it would be more than capable of handling some LLM stuff. I've found an AMD 7700 XT does well with 7B models on my main rig, but I'm sure you could get away with something cheaper or less powerful.
That said, the amount of text you can generate and the context length of its answers will depend on the model you use, and the larger the model, the more power it takes.
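For what it's worth, here's roughly what running a quantized 7B model looks like with llama-cpp-python. This is just a sketch; the model path and settings are placeholders you'd tune for your own card:

```python
# Rough sketch with llama-cpp-python (pip install llama-cpp-python,
# built with GPU support). The model path is a placeholder -- point it
# at any quantized 7B GGUF file you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU if they fit in VRAM
    n_ctx=4096,       # context window; larger values use more memory
)

out = llm("Summarize why running LLMs locally is cheap.", max_tokens=200)
print(out["choices"][0]["text"])
```

Bigger models and longer context windows both eat VRAM, which is why a smaller quant on a budget card still works fine for short answers.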
If you're just messing around with it or want it to review things or answer small questions, I'd say a 1070 Ti like I'm using would be just fine. Some folks use even more budget-friendly options. If you've got a gaming machine with any semi-recent GPU, I'd say go for it. Worst case, you can pay for a subscription later if you really want.
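If you want the lowest-effort way to try it, something like Ollama runs a local server you can hit with a couple of lines. A quick sketch (assumes you've already pulled a model; the model name here is just an example):

```python
# Minimal sketch against a local Ollama server (https://ollama.com);
# assumes you've already run `ollama pull llama3` or similar.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hi in five words.", "stream": False},
)
print(resp.json()["response"])
```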