Voice assistants, espionage and Occam’s razor

Enrique Dans

--

A couple from Portland, Oregon has fueled conspiracy theories that home assistants are spying on us after telling their local television station that their Amazon Echo recorded a private conversation between them and then emailed the recording to a friend, who immediately alerted them.

Amazon’s explanations are perfectly reasonable, as anyone who regularly uses voice-activated devices will know: Echo woke up after a word in the background conversation sounded like “Alexa.” Part of the conversation was then heard as a “send message” request, to which Alexa responded aloud “To whom?”; the background conversation was interpreted as a name in the couple’s contact list, Alexa repeated the contact name, and the background conversation was interpreted as “right”.

An extremely improbable chain of events, but certainly not proof that home assistants are spying on us… simply that technology can get things wrong. Amazon says it is now working on ways to reduce the chance of this happening again, so let’s not get carried away with conspiracy theories. As William of Occam said, the simplest explanation is usually the most likely: an error or chain of errors in voice-command recognition is far simpler and more plausible than a strategy of spying on millions of people around the world, which would inevitably be rumbled sooner or later.

We are talking here about a relatively new technology: assistants that respond to certain trigger words, “Alexa” in the case of Amazon, “Hey Siri” for Apple, “OK Google” for Google, “Hey Cortana” for Microsoft, and so on. Bearing in mind the imperfection of human language, some mishaps are inevitable: how many times do we ask the person we’re talking to to repeat themselves? There are already any number of anecdotes about voice assistants waking up for no apparent reason. Siri has interrupted me on occasion while I’m giving presentations at conferences, and I have even started using this quirk to demonstrate the technology’s limitations. Yesterday, during a meeting in my office, Siri woke up and interpreted as a search request something said by the person I was talking to, not by me: I had presumably woken her up, but the next thing she heard came from the other person (once you wake up your Siri, somebody else can give orders or ask questions). Again, this is a technology in its initial phases, with occasional failures that, given its nature, can never be totally avoided. Technology advances… but it doesn’t work miracles.

If you really believe these assistants are being used to listen to our conversations, the simplest thing to do is not use them… although if you follow the twisted logic that technology is being used to control us, you might end up living in a cabin in the woods, hopefully resisting the temptation to send letter bombs through the mail. Nobody is forcing us to use these devices: we do so out of curiosity and because they can be useful. The idea of being able to ask a voice assistant to put on music, turn the lights on or off, request a ride, buy something, give you the weather forecast, read the news or a thousand other possibilities may or may not appeal to you. That said, humanity made it this far without voice-activated assistants.

I guess the issue comes down to how much you trust or distrust the companies that manufacture these devices, how useful you might find them, and your understanding of the technology behind them. Either way, my humble advice would be to avoid conspiracy theories in the future and instead remember Occam’s razor.

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)