Take a look at GPT4All; it's very user friendly.
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I like KoboldCpp. It's easy to set up and runs well even on limited resources.
With something like that, you should be able to fit a much larger and better model into your RAM if you use quantized versions. Look for models in GGUF format on Hugging Face; Q4_K_M is a good compromise between size and quality.
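If you'd rather script it than use the KoboldCpp UI, here's a minimal sketch of loading a quantized GGUF model with the llama-cpp-python bindings. The model path and generation settings are just placeholders; point it at whatever Q4_K_M file you actually downloaded.

```python
# Minimal sketch: run a quantized GGUF model locally with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mythomax-l2-13b.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window; smaller values use less RAM
    n_gpu_layers=0,    # set > 0 to offload layers to a GPU if you have one
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```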
Which model is best depends on your exact use case. I like MythoMax-L2-13B or Llama2-13B-Tiefighter for roleplay, and Mistral 7B (Dolphin 2.1 Mistral 7B) or Toppy-M for more factual tasks. All of those are uncensored.
Hope you had some success. Don't hesitate to ask if you have further questions.
As an alternative, you could look at distributed/shared inferencing. There's https://horde.koboldai.net/ (which you probably know) and petals.dev.
I haven't tested those myself, though.
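For what it's worth, here's roughly what the Petals client looks like in Python, based on its README; the model name is just an example and assumes the public swarm is hosting it:

```python
# Sketch of distributed inference over the Petals swarm (pip install petals).
# The model name is an example; whatever is currently hosted may differ.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model from the Petals docs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Tokens are processed by layers spread across volunteer machines in the swarm.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```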