[–] Blue_Morpho@lemmy.world 2 points 3 months ago (1 children)

> Nothing AI about it.

Voice processing is AI, and it was done on Apple's servers. Previously, only the wake phrase "Hey Siri" was processed locally. Onboard AI chips will allow this to be local. The actual queries will still go to the servers. Phones do not have the power to run a useful LLM locally, at least not with the near-instantaneous response times phone users expect. A 56-watt M3 Max with 128 GB of RAM does around 8.5 tokens/second.

https://www.nonstopdev.com/llm-performance-on-m3-max/
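Back-of-the-envelope, here's what 8.5 tokens/second means for a short spoken reply (a sketch in Python; the ~50-token reply length is an assumed figure, not anything Apple has published):

```python
# Rough latency estimate for generating one short assistant reply.
# The decode speed comes from the benchmark linked above; the reply
# length is an assumption for illustration.
decode_speed_tps = 8.5   # tokens/second, large model on an M3 Max
reply_tokens = 50        # assumed length of a short spoken answer (~35-40 words)

latency_s = reply_tokens / decode_speed_tps
print(f"~{latency_s:.1f} s to generate a {reply_tokens}-token reply")  # ~5.9 s
```

Six seconds of silence before Siri starts answering is nowhere near the response time people expect from a voice assistant.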

[–] PassingThrough@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (1 children)

> Onboard AI chips will allow this to be local.
>
> Phones do not have the power to ~~~

Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.

It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.

EDIT: Finished looking for what I thought I remembered…

Additionally, Siri has been locally processed since iOS 15.

https://www.macrumors.com/how-to/use-on-device-siri-iphone-ipad/

[–] Blue_Morpho@lemmy.world 2 points 3 months ago (1 children)

> Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer?

I'm not guessing. I linked to an article about the M3 Max, which is much more powerful than the A17 Pro in the 15 Pro and has the same NPU.

[–] PassingThrough@lemmy.world 1 points 3 months ago

Forgive me, I'm no AI expert, and I can't fully relate a tokens-per-second measurement to the average query Siri might handle, but I will say this:

Even in your article, only the largest model ran at ~8 tokens/second; the others ran much faster, and none of them were optimized for a specific task, they were just being benchmarked.

Would it be impossible for Apple to run a model optimized for expected mobile tasks, and to leverage their own hardware more efficiently than we can, to meet their needs?

I imagine they cut out most world knowledge and use a lightweight model, which is why some requests still need to be handed off to ChatGPT or Apple's servers. Would that let them trim Siri down to perform well enough on the phone for most requests? They also advertised launching these AI features on M1 and M2 devices, which are not M3 Max either…
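To put rough numbers behind that intuition: LLM text generation is mostly limited by memory bandwidth, so a first-order sketch looks something like the following (the bandwidth figures are approximate and the model sizes are purely illustrative assumptions, not Apple's actual configuration):

```python
# First-order estimate: autoregressive decoding is usually
# memory-bandwidth-bound, so tokens/s ~ bandwidth / weight bytes
# read per token. Hardware numbers are approximate; model sizes
# and quantization levels are illustrative assumptions.
GB = 1e9

def est_tokens_per_sec(bandwidth_gbps: float, params_billions: float,
                       bytes_per_param: float) -> float:
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gbps * GB / weight_bytes

# Large model on an M3 Max (~400 GB/s): same ballpark as the ~8.5 tok/s benchmark.
print(f"70B @ 8-bit on M3 Max:  ~{est_tokens_per_sec(400, 70, 1.0):.0f} tok/s")
# Small, heavily quantized model on a phone-class SoC (~50 GB/s assumed):
print(f"3B @ 4-bit on phone SoC: ~{est_tokens_per_sec(50, 3, 0.5):.0f} tok/s")
```

On those rough numbers, a small quantized model on a phone could decode several times faster than the big benchmark models on an M3 Max, which fits the idea of a trimmed-down, on-device Siri handling most requests.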