Interesting, but that's not what I'm getting at all from gemma and phi on ollama.
Then again, on a second attempt I get wildly different results for both of them. It might be a matter of advanced settings, like temperature, but single examples don't seem to be indicative of one model being better than the other at this type of question.
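For what it's worth, most of the run-to-run variance comes from the sampler rather than the model. A rough sketch against Ollama's REST API (assuming the default localhost:11434 endpoint and that you've already pulled the gemma tag): pinning the seed and lowering the temperature makes repeated runs comparable, so a single example says more about the model than about the sampling.

```python
import requests

# One-off generation against a local Ollama server (default port assumed).
# Pinning "seed" and lowering "temperature" makes repeated runs reproducible,
# which helps when comparing models on a single prompt.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma",          # or "phi"
        "prompt": "Explain why the sky is blue in one sentence.",
        "stream": False,
        "options": {
            "temperature": 0.2,    # less randomness
            "seed": 42,            # reproducible sampling
        },
    },
    timeout=120,
)
print(resp.json()["response"])
```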
I find gemma too censored; I'm not using it until someone releases a cleaned-up version.
Phi, on the other hand, outputs crazy stuff too often for my liking. Maybe I need to tune some inference parameters.
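A quick way to see how much of that is the sampler is to sweep the temperature (and optionally top_k / repeat_penalty) on the same prompt. Purely a sketch, with the same assumption of a local Ollama server as above:

```python
import requests

PROMPT = "Summarize the plot of Hamlet in two sentences."

# Sweep temperature on the same prompt to see how much of the wild output
# comes from sampling settings rather than the model itself.
for temp in (0.0, 0.5, 1.0):
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "phi",
            "prompt": PROMPT,
            "stream": False,
            "options": {"temperature": temp, "top_k": 40, "repeat_penalty": 1.1},
        },
        timeout=120,
    )
    print(f"--- temperature={temp} ---")
    print(resp.json()["response"])
```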