this post was submitted on 03 Nov 2023
169 points (92.0% liked)

[–] Dave@lemmy.nz 13 points 1 year ago (1 children)

The blog post states:

We build the AI Assistant using a flexible, solution-independent approach which gives you a choice between multiple large language models (LLM) and services. It can be fully hosted within your instance, processing all requests in-house, or powered by an external service.

So it sounds like you pick what works for you. I'd guess that on a Raspberry Pi, on-board processing would be both slow and poor quality, but I'll probably give it a go anyway.

[–] pixxelkick@lemmy.world 2 points 1 year ago (2 children)

Yeah, sorry, I was specifically referring to the on-prem LLM, if that wasn't clear, and how much juice running that thing takes.

[–] Dave@lemmy.nz 4 points 1 year ago

Some of the other Nextcloud features (like the chat) aren't suitable for a Raspberry Pi, and I expect this will be the same. It's released though, right? Might have to have a play.

[–] EatYouWell@lemmy.world 2 points 1 year ago

You'd be surprised at how little computing power it can take, depending on the LLM.
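As a rough illustration of why a small, heavily quantized model can fit on modest hardware, here's a back-of-envelope memory estimate (a sketch only; real runtimes add overhead for the KV cache and activations on top of the weights):

```python
def model_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate RAM (decimal GB) needed just to hold the model weights."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B-parameter model at 16 bits per weight needs ~14 GB of RAM,
# but 4-bit quantization cuts that to ~3.5 GB -- within reach of
# an 8 GB single-board computer, albeit slowly.
print(model_memory_gb(7, 16))  # 14.0
print(model_memory_gb(7, 4))   # 3.5
```

The weight footprint scales linearly with both parameter count and bit width, which is why the choice of LLM and quantization level matters so much more than raw CPU speed for whether it runs at all.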