Google Home Mini’s bug shows why cloud-based voice assistants are a bad idea
Let's be clear: all technology has bugs. It's the reason your apps sometimes crash, or you see weird glitches on websites. Usually these bugs are harmless, so we just accept them. But when it comes to a device that sits in your home, listening 24/7 to what you say, a bug becomes a very big deal. This is exactly what happened to the Google Home Mini: an error in its implementation meant that it kept activating itself, and thus recording what it heard and sending this private data to Google.
A voice assistant device like the Google Home or Amazon Echo works the following way:
- The device contains a microphone that listens 24/7, processing the sound to detect when someone says "OK Google", "Alexa" or "Hey Siri". This is called "hotword detection", and it is how your device knows you are talking to it. It typically happens on the device itself, so nothing is sent to the cloud at that point. Some devices, such as the Google Home Mini, also let you activate them by pressing a button instead of saying the hotword.
- Once the hotword has been detected, the device starts listening to what you are saying, typically the question or command you want it to perform. Two things have to happen for the assistant to reply: first, it has to transcribe your voice into text (called Speech Recognition), and then it has to analyze that text and extract the meaning of what you said (called Natural Language Understanding). Both tasks are quite computationally expensive, so the device sends your voice to the cloud, processes it there, then sends back a response. What this means is that whatever you say after the hotword will be sent to Google or Amazon, and potentially stored on their servers.
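The two-stage flow above can be sketched in a few lines of Python. Everything here is an illustrative stand-in (the function names, the string-based "hotword detection", the fake cloud call are all assumptions for the sake of the example, not real Google or Amazon APIs), but it shows where the privacy boundary sits: nothing crosses to the cloud until the hotword fires.

```python
# Illustrative sketch of the two-stage voice assistant pipeline.
# All names and logic are hypothetical stand-ins, not real vendor APIs.

HOTWORD = "ok google"

def detect_hotword(transcript):
    """Stage 1: runs locally on the device. In reality this is a small
    ML model scanning raw audio; here a string check stands in for it."""
    return transcript.lower().startswith(HOTWORD)

def send_to_cloud(utterance):
    """Stage 2: speech recognition + natural language understanding.
    This is the privacy boundary: everything passed here leaves the device."""
    return f"cloud processed: {utterance}"

def handle_audio(transcript):
    if not detect_hotword(transcript):
        return None  # no hotword: nothing leaves the device
    query = transcript[len(HOTWORD):].strip()
    return send_to_cloud(query)

print(handle_audio("ok google what's the weather"))  # cloud processed: what's the weather
print(handle_audio("just a normal conversation"))    # None
```

The Google Home Mini bug is equivalent to `handle_audio` skipping the `detect_hotword` check entirely: with phantom touches, every utterance went straight to `send_to_cloud`.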
What happened in this particular case is that the Google Home Mini registered "phantom" touches: it thought the person had pressed its button to trigger it, when in fact nothing was actually happening. And since it thought it was triggered, it did what it was supposed to do, which is record what it heard and send the data to the cloud for processing.
Google's response was to deactivate the trigger-on-touch feature, and only let you trigger your home device by saying the hotword "OK Google". Unfortunately, this doesn't completely solve the problem either. A well-known issue for anyone owning such a device is that it sometimes randomly triggers, thinking you talked to it. This is because the hotword detection uses machine learning, which is never 100% accurate. Sometimes it will fail to trigger when you want it to, and sometimes it will trigger by accident. But when the latter happens, it means whatever you say will be recorded and sent to the cloud. It might only happen once an hour or once a day, but it still does happen, without you knowing, recording things in your house you might not have wanted it to (your kids playing, intimate moments, etc.).
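The accuracy trade-off behind these accidental triggers can be made concrete with a toy example. A hotword detector outputs a confidence score, and the device fires when that score crosses a threshold; the scores and thresholds below are made-up numbers purely for illustration.

```python
# Hypothetical illustration of the hotword threshold trade-off.
# Scores a detector might assign to different sounds (invented values):
REAL_HOTWORD   = 0.92  # someone actually said "OK Google"
SIMILAR_PHRASE = 0.78  # e.g. "OK cool" from the TV
QUIET_HOTWORD  = 0.85  # a real "OK Google", mumbled across the room

def fires(score, threshold):
    """The device triggers whenever the model's confidence crosses the bar."""
    return score >= threshold

LENIENT, STRICT = 0.70, 0.90

# A lenient threshold catches every real hotword...
assert fires(REAL_HOTWORD, LENIENT) and fires(QUIET_HOTWORD, LENIENT)
# ...but also fires on lookalike phrases: an accidental recording.
assert fires(SIMILAR_PHRASE, LENIENT)

# A strict threshold avoids the false accept...
assert not fires(SIMILAR_PHRASE, STRICT)
# ...but now misses the quieter real hotword: a frustrated user.
assert not fires(QUIET_HOTWORD, STRICT)
```

No threshold eliminates both failure modes at once, which is why even a well-tuned device will occasionally record when it shouldn't.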
This is why quite a few people are now arguing that cloud-based home assistant devices might not be compliant with the upcoming EU privacy regulation, the GDPR. Because of how sensitive voice recordings are, you should in theory ask for consent from anyone being recorded. This might be doable for people living in your house, but what happens when you have guests over and they get recorded by accident? Since they didn't give consent, and their voice can be used to identify them, this creates a tricky situation.
Thankfully, there is a way to avoid all of this: guarantee Privacy by Design and GDPR compliance without any change to the user experience of our home assistants, by processing the voice and query directly on the device instead of in the cloud. Indeed, if whatever is recorded by the microphone never gets sent to the cloud, and instead stays inside the device you are talking to, then there are no more privacy issues. It wouldn't matter if everything was recorded, since nobody would ever have access to it. And because of this, no consent would be required either.
Let’s be clear: Google is an exceptionally good company, and their engineers are amongst the best in the world. If even they sometimes make mistakes, what can we say of the hundreds of other companies out there building connected devices for our homes? The likelihood that one of those other devices will have a bug is almost 100%.
Given that the technology for on-device voice assistants already exists, it raises the question: why are companies still sending our private voice data to the cloud?
If you enjoyed this article and care about privacy, please share it :)
If you want to work on AI + Privacy, check our jobs page!