this post was submitted on 19 Dec 2023
1567 points (97.9% liked)

memes

[–] carpelbridgesyndrome@sh.itjust.works 40 points 10 months ago (6 children)

Voice assistants are money-losing products. If they can do something like processing the wake words on the device before choosing to send audio to a server, they will. These companies are far too stingy to continuously stream audio to their servers.
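A minimal sketch of that gating pattern (the helper functions here are hypothetical placeholders, not any vendor's real API):

```python
import socket

SERVER = ("assistant.example.com", 443)  # hypothetical upstream endpoint


def read_audio_frame() -> bytes:
    """Hypothetical: pull one frame of PCM microphone audio."""
    raise NotImplementedError


def detect_wake_word(frame: bytes) -> bool:
    """Hypothetical: tiny on-device model; True only on the wake word."""
    raise NotImplementedError


while True:
    frame = read_audio_frame()
    if not detect_wake_word(frame):
        continue  # nothing leaves the device until the wake word fires
    with socket.create_connection(SERVER) as conn:
        # Only the follow-up utterance is streamed for server-side processing.
        for _ in range(200):  # roughly a few seconds of audio
            conn.sendall(read_audio_frame())
```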

[–] linearchaos@lemmy.world 13 points 10 months ago

Back in the day, when everything had to be processed server-side, sure.

Now we have purpose-built hardware helping work this shit out. The devices are basically capable of handling natural-language resolution locally; they no longer need to farm the data out. I still don't think they're doing this, since we would see it in the open-source operating systems, but if they wanted to, any late-model cell phone would be absolutely fine parsing your interests out of your conversations. Hell, I'm sure the contents of this dictation I'm making now are being reduced and added to my social graph at Google.
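For what it's worth, fully offline transcription already runs fine on modest hardware. A sketch with the open-source Vosk library (the model path and input file are assumptions, and Vosk is just one example of such a library):

```python
import json
import wave

from vosk import KaldiRecognizer, Model  # pip install vosk

# Assumes a small English model downloaded from the Vosk site to this path.
model = Model("vosk-model-small-en-us-0.15")

wav = wave.open("utterance.wav", "rb")  # 16 kHz mono PCM assumed
rec = KaldiRecognizer(model, wav.getframerate())

while True:
    data = wav.readframes(4000)
    if not data:
        break
    rec.AcceptWaveform(data)

# The final transcript, produced entirely on the local machine.
print(json.loads(rec.FinalResult())["text"])
```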

[–] howrar@lemmy.ca 8 points 10 months ago (1 children)

I think this should be fairly easy to test yourself. Just disconnect from the WAN, say the wake word, and see if the device responds.
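Or, without unplugging anything, watch the device's traffic while you talk near it. A rough sketch with scapy (the device IP and capture interface are assumptions; needs root):

```python
from scapy.all import sniff  # pip install scapy; run as root

DEVICE_IP = "192.168.1.50"  # assumed LAN address of the smart speaker
IFACE = "eth0"              # assumed capture interface

# Hold a conversation near the device for 60 s WITHOUT saying the wake word.
# A device that only uploads after the wake word should stay near-silent.
packets = sniff(iface=IFACE, filter=f"host {DEVICE_IP}", timeout=60)
print(f"{len(packets)} packets, {sum(len(p) for p in packets)} bytes in 60 s")
```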

[–] intensely_human@lemm.ee 3 points 10 months ago

He means the internet, people. He means disconnect from the internet.

[–] books@lemmy.world 6 points 10 months ago (1 children)

Someone can correct me if I'm wrong, but Home Assistant is currently struggling with this and processes everything on your local box, because it can't do wake words on the device.

[–] ReadingCat@programming.dev 6 points 10 months ago

I think they're choosing to do it that way. Raspberry Pis easily have the capability to do wake word recognition on-device (I think they're also working on that). ESPs, on the other hand, can only stream audio to the server and not much more. Since ESPs are far cheaper than installing a Raspberry Pi in each room, they're focusing on doing wake word detection on the server, not on the device.
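In other words, the cheap ESP path just moves the detector to the other end of the wire. A minimal sketch of that server side (the port and the detector stub are hypothetical; a real setup would run an actual wake word model):

```python
import socket

FRAME_BYTES = 1024  # size of each raw PCM chunk the ESP pushes


def detect_wake_word(frame: bytes) -> bool:
    """Stub; a real server would run a small wake word model here."""
    return False


srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5050))  # hypothetical port
srv.listen(1)
conn, addr = srv.accept()
print(f"ESP connected from {addr}")

# The ESP streams microphone audio continuously; all detection happens here.
while frame := conn.recv(FRAME_BYTES):
    if detect_wake_word(frame):
        print("wake word heard; hand off to the full speech pipeline")
```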

[–] byroon@lemmy.world 3 points 10 months ago

Yeah, what possible use could this company, whose business model relies on surveillance, have for surveilling you?

[–] Pohl@lemmy.world 2 points 10 months ago

Exactly. If it is practical and money can be made doing it, then continuous, ambient sound parsing will be the norm. Currently it seems like it’s not a valuable business. When it is valuable to them, they will add a checkbox somewhere in your account to disable it, and most people will not be bothered enough to look for it.

[–] douglasg14b@lemmy.world 1 points 10 months ago

Are they, though?

My experience is much, MUCH different. The amount of compute waste is through the roof, and we shrug at +$50k/month provisioning. You don't even need approvals for that, and you can leave it idle and you MIGHT get a ping from cloud gov after a few months.