this post was submitted on 30 Jun 2024
29 points (91.4% liked)
ChatGPT
Fuck OpenAI. Use Mixtral 8x22B Instruct through OpenRouter or self-hosted; it's almost as capable and significantly cheaper.
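For anyone curious how small the switch is: OpenRouter exposes an OpenAI-compatible endpoint, so it's mostly a base-URL change. Rough sketch below; the model slug and env var name are my assumptions, so double-check them against OpenRouter's model list:

```python
# Minimal sketch: calling Mixtral 8x22B Instruct via OpenRouter's
# OpenAI-compatible API. Assumes the "openai" Python package is installed
# and OPENROUTER_API_KEY is set; the model slug may differ on their side.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",
    messages=[{"role": "user", "content": "Summarize why MoE models are cheap to serve."}],
)
print(resp.choices[0].message.content)
```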
I also really want to see a public effort to do further training of a FOSS model like Mixtral 8x22B on an uncensored dataset with banned books, 4chan, etc., to make an uncensored model with unchecked capabilities.
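Continued training on a big MoE like that is mostly a data and compute problem; the code side is fairly standard. A rough LoRA-style sketch with transformers + peft, where the corpus file, base model choice, and hyperparameters are all placeholders (and even the 8x7B variant needs serious hardware or quantization):

```python
# Rough sketch of continued training on a Mixtral-style model with LoRA,
# using Hugging Face transformers + peft. corpus.txt is a placeholder for
# whatever dataset you actually assemble.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from peft import LoraConfig, get_peft_model

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # 8x22B needs far more VRAM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)

# Train small adapter matrices instead of the full weights.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Placeholder corpus: any text dataset with a "text" column works here.
data = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mixtral-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("mixtral-lora")
```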
I can't believe I'm considering purchasing another GPU just so I don't have to depend on OpenAI or anyone else toying around with the models.
Now you know why the real winners of AI are the chipmakers.
You can mine for gold, or you can sell pickaxes and shovels.
I do have a local setup. Not powerful enough to run Mixtral 8x22B, but it can run 8x7B (albeit quite slowly). I use it a lot.
Yeah, I'm pretty done with OpenAI's pricing; it's absurd compared to the alternatives.
The only problem I really have is context size. It's hard to go beyond an 8k context and maintain decent generation speed with 16 GB of VRAM and 16 GB of RAM. Gonna get more RAM at some point though, and I hope Ollama/llama.cpp get better at memory management. Hopefully the distributed inference from llama.cpp ends up in Ollama.
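In case it helps anyone tuning the same thing, context size is just a per-request option in Ollama, so it's easy to probe where your setup falls over. Quick sketch; the model tag is an assumption, adjust to whatever you've actually pulled:

```python
# Quick check of how context size is set per request via Ollama's REST API.
# The num_ctx option is what trades RAM/VRAM for context length; assumes the
# local Ollama server is running and mixtral:8x7b has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral:8x7b",
        "prompt": "Give me a one-line summary of mixture-of-experts models.",
        "stream": False,
        "options": {"num_ctx": 8192},  # bump this and watch memory usage climb
    },
)
print(resp.json()["response"])
```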