s-kostyaev

[–] s-kostyaev@alien.top 1 points 2 years ago

See also [this reply](https://github.com/s-kostyaev/ellama/issues/13#issuecomment-1807954046) about using ellama on weak hardware:

> You can try:

[–] s-kostyaev@alien.top 1 points 2 years ago

But you can use it with the OpenAI or Google APIs.
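
For illustration, here is roughly how a remote provider can be plugged in through the llm library; this is a minimal sketch assuming the llm-openai provider, with placeholder key and model name (the Google side works analogously through llm's Vertex provider):

```elisp
;; Sketch: pointing ellama at OpenAI instead of a local model.
;; The key and chat model below are placeholders, not defaults.
(require 'llm-openai)
(setopt ellama-provider
        (make-llm-openai :key "sk-..."              ; your OpenAI API key
                         :chat-model "gpt-3.5-turbo"))
```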

[–] s-kostyaev@alien.top 1 points 2 years ago

Most Chromebooks have very weak hardware. I don't think it will run fast enough to be useful.

[–] s-kostyaev@alien.top 1 points 2 years ago (5 children)

This and other things are also possible with ellama. It also works with local models.

[–] s-kostyaev@alien.top 1 points 2 years ago

Not right now, and personally I don't think it would be useful, but I can see how it could be done. The hardest thing there is collecting good context for the completion; without that, it would just be a dumb T9.

[–] s-kostyaev@alien.top 1 points 2 years ago

Sure. You can use the functions without double dashes as the public API. If you need something specific, open an issue, or use the llm library directly if you want more control.
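
For anyone curious what "use the llm library directly" looks like, here is a minimal sketch; it assumes the llm-ollama provider and a model already pulled into ollama ("mistral" is just an example name), and the prompt helper reflects the llm API of the time:

```elisp
;; Sketch: calling the llm library directly, bypassing ellama.
(require 'llm)
(require 'llm-ollama)

(let ((provider (make-llm-ollama :chat-model "mistral")))
  ;; Synchronous request; llm also provides llm-chat-async and
  ;; llm-chat-streaming for non-blocking use.
  (llm-chat provider
            (llm-make-simple-chat-prompt
             "Explain closures in Emacs Lisp.")))
```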


ellama is a tool for interacting with LLMs from Emacs. It now uses the llm library as a backend and supports various providers.
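
A typical configuration, along the lines of ellama's README, looks like this; "zephyr" is only an example model name, and any model served by a local ollama instance should work:

```elisp
;; Sketch of a minimal ellama setup with a local ollama backend.
(use-package ellama
  :init
  (require 'llm-ollama)
  (setopt ellama-provider
          (make-llm-ollama :chat-model "zephyr"
                           :embedding-model "zephyr")))
```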

[–] s-kostyaev@alien.top 1 points 2 years ago

I like large language models like ChatGPT, but I prefer to run them locally on my own hardware. There is a cool project that helps with this: https://github.com/jmorganca/ollama. I prefer an Emacs interface for interacting with text, so I created this project. All components are open source. Feedback is welcome.
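
Once a provider is configured, day-to-day use goes through ellama's interactive commands; the bindings below are an illustrative sketch, and the command names match recent ellama versions (older releases may use different names):

```elisp
;; Sketch: optional keybindings for a few ellama entry points;
;; the commands can also be run directly via M-x.
(global-set-key (kbd "C-c e c") #'ellama-chat)      ; chat with the model
(global-set-key (kbd "C-c e s") #'ellama-summarize) ; summarize the region
(global-set-key (kbd "C-c e t") #'ellama-translate) ; translate word/region
```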