The Next Privacy War Will Happen in Our Homes

How will life change when every noise becomes a search prompt?

Colin Horgan
Jan 19
Photo: Jan Antonin Kolar/Unsplash

A few days before Christmas, a German man downloaded his personal information from Amazon. In response, Amazon sent him 1,700 Alexa voice recordings. The only problem was they belonged to someone else.

The customer shared the recordings with German tech magazine C’t, which chose not to name him. “Suddenly, we found ourselves in the intimate sphere of strangers without their knowledge,” the magazine’s staff wrote. According to NPR, the magazine claimed that on the recordings “a man could be heard in various parts of his home, even in the shower,” and that “there were alarm clock and music commands, weather questions and also comments related to work or living habits.”

When Reuters contacted Amazon, the company said the mistake was “the result of human error” and “an isolated single case.” This is likely true: Amazon wouldn’t have much interest in sharing Alexa recordings between customers. But in the very near future, Amazon, as well as other tech giants investing in computer assistants (Google, Apple, and Facebook among them), will be analyzing those recordings more closely for their own purposes. And while they’ll still be interested in what we say to Alexa, Assistant, or Siri, they’ll be just as interested in what we’re not saying — the background noise, the “intimate sphere” of our homes.

That’s where the next privacy battle will be waged.


Your home is an underused platform. Google and Amazon want to claim it. And they want all the data it contains.

Our homes are already replete with devices that connect to Wi-Fi — from the refrigerators in our kitchens to the locks on our doors. But connecting everything is just the start. Soon, you’ll be able to talk to everything, too. You’ll talk to your TV and speakers, of course. But also perhaps to your light bulbs and your mirror and your coffee maker. All will be powered by a voice command system that, chances are, will have been created by either Google or Amazon.

And while tech companies assure customers their devices will only engage after “hearing” a command or wake word (“Hey, Google!” or “Alexa!”), these devices have a critical flaw: To work properly, they must be listening for that wake word all the time. And they make mistakes. This is how the unnamed man in Germany received recordings of household background noise. It’s also why, in November, a judge ordered Amazon to hand over recordings from a speaker found at the scene of a double murder.

Recordings of crimes taking place amid background noise might be useful to law enforcement, but background noise is only useful to Google and Amazon if they can perfect technology that turns it into more information about who’s using their devices.

In October, Amazon showcased Alexa’s newest features, including the ability to detect when someone is whispering and respond at a quieter volume. According to Wired, Amazon also has plans to introduce a home security feature, Alexa Guard, giving the program the ability to listen “for trouble such as broken glass or a smoke alarm when you’re away from home.” A month later, the Telegraph reported that Amazon had patented Alexa software that could one day analyze someone’s voice for signs of illness (like a cough or a sneeze) and respond by offering to order cough drops.

Some designers have already given us a way to effectively deafen our smart speakers until the moment we prompt them with the wake word. Bjørn Karmann and Tore Knudsen recently unveiled what they call a “teachable parasite” that feeds white noise into a smart speaker, masking the noises of the speaker’s surroundings — i.e., your personal space.


But an add-on that stems the flow of data is almost literally a band-aid solution. Once Alexa, Google Assistant, or Siri morph from standalone pieces of technology that sit on our tables or counters into software built into nearly every gadget in our homes, temporary modifications and add-ons will be insufficient. When we’re fully surrounded by all-listening ears, the privacy breaches we’re witnessing now — the random recordings sent to the wrong people — will increase not only in frequency but in magnitude.

Just as we once assumed all Facebook knew was the information we willingly gave it, we’re unprepared for the myriad ways smart speakers tuned into our surroundings could someday be exploited, whether for our convenience or not. We’re not equipped to fully appreciate the trade-offs we’re making. We don’t know what we need to do to protect ourselves, how much we even need protection, or if the tools to do so are even available.

Mentally, we’re equally unprepared for what’s to come. As long as smart speakers remain a visible, external piece of furniture, we can mentally separate them from our lives. They are not yet seamlessly integrated into our days, but what happens when they are? What happens when we go from merely interacting with Alexa to living with her? When we go from inviting Alexa into our home to accepting the program as a necessary feature of life?

“We envision a world where the consumer devices around us are more helpful, more intelligent, more… human,” says Audio Analytic, a company that has created software capable of recognizing a range of sounds. It reportedly hopes this technology will soon be able to detect the sound of consumer products when they’re being used around smart speakers.

Anyone who’s chatted with Alexa knows the feeling of wanting artificial intelligence programs to feel more human. And despite their stilted speech and limited range of responses, they already do feel somewhat human. That is, of course, by design. Manufacturers want us to feel connected to this tech, not just pragmatically but emotionally. Alexa and Siri can’t just be computer programs. They need to be trusted. They need to be friends we invite into our homes, or that we allow to sleep beside us.

But when voice-activated computer assistants eventually become a necessary feature of our lives, we may notice a profound irony.

When every noise in our lives is a search prompt, the sounds of our homes, the symphony of life — laughing, crying, talking, shouting, sitting in silence — will no longer be considered memories, but data. The more we humanize technology — the more it becomes not just part of the furniture but part of the family — the less human our lives will become.
