Amazon Echo Takes Its Name Literally
Repeating What It Hears to Others, Amazon's Alexa-Powered Echo Gets Gossipy
Amazon Echo spilled the beans and emailed private conversations it recorded to someone on its owner's contact list, raising privacy, surveillance, and ethics concerns.
What did we think would happen when we brought Echo, a barely tested technology, into our homes?
The myth is old and epic. The Trojan Horse: an oversized wooden horse, hollow and filled with soldiers, that is willingly rolled inside the fortress walls, bypassing its defenses. When the soldiers pile out and attack, the fortress, vulnerable and exposed, is captured.
Seriously though, what did we think would happen?
Recode reported Amazon's explanation: the Echo heard a word in a background conversation and interpreted it as "Alexa." The Echo then woke up and listened for commands (or what it thought were commands) that it then acted upon, commands like "Send message to <contact>." But as Jason Del Rey of Recode also stated, "the fact that Alexa can interpret background conversation as a confirmation is a big problem."
Indeed, background conversation as confirmation is a big problem. It transforms Alexa from a useful friend and occasional pest into a fully developed digital busybody with limited hearing: one that eavesdrops on conversations from a distance and sends messages based on its assumptions about what is being said, all without ever checking for an explicit affirmation. It is the worst sort of social interference.
It is hard to see how Amazon can prevent this from happening again. One option would be to require Alexa to hear "Alexa" more than once, but that would be annoying whenever someone wanted it to act immediately. Amazon could also tighten what "send message" means and gate the address book behind more specific commands, but that changes how the Echo is used. The more gatekeeping the commands require, the less time the device saves. A sketch of what such a confirmation gate might look like follows below.
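To make the trade-off concrete, here is a minimal sketch of a stricter confirmation gate. Nothing in it reflects Amazon's actual Alexa implementation; the intent names, the confidence score, and the directed_at_device flag are all hypothetical stand-ins for signals a device like the Echo could plausibly check before acting on a sensitive command.

```python
# Hypothetical sketch of a stricter confirmation gate for a voice assistant.
# None of these names come from Amazon's actual Alexa stack; they are
# illustrative stand-ins for the kinds of checks the device could make.

from dataclasses import dataclass

# Commands that can leak private data get the strictest treatment.
SENSITIVE_INTENTS = {"send_message", "call_contact", "share_location"}

@dataclass
class Utterance:
    text: str                  # what the speech recognizer transcribed
    confidence: float          # recognizer confidence, 0.0 to 1.0 (assumed metric)
    directed_at_device: bool   # e.g., wake word heard clearly, speech aimed at device

def confirm_sensitive_action(intent: str, reply: Utterance) -> bool:
    """Return True only on an explicit, high-confidence 'yes'.

    Background chatter tends to be low confidence and not directed at the
    device, so it fails these checks instead of silently counting as assent.
    """
    if intent not in SENSITIVE_INTENTS:
        return True  # low-stakes commands need no extra gate
    explicit_yes = reply.text.strip().lower() in {"yes", "confirm", "send it"}
    return explicit_yes and reply.confidence >= 0.9 and reply.directed_at_device

# Muffled background speech should never count as confirmation.
overheard = Utterance(text="yes", confidence=0.42, directed_at_device=False)
assert not confirm_sensitive_action("send_message", overheard)

# A deliberate, clearly heard reply passes the gate.
deliberate = Utterance(text="Yes", confidence=0.97, directed_at_device=True)
assert confirm_sensitive_action("send_message", deliberate)
```

Requiring an explicit, high-confidence, device-directed "yes" is exactly the kind of gatekeeping described above: it would likely have stopped the accidental message, and it would just as surely make everyday use slower.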
The SNL skit with the Amazon Echo, Alexa, and the elderly was one of the smartest reviews of the Echo to date, and it still is. In it, older people struggle to communicate with the Echo, and when they do succeed, they sometimes doubt its answers. They also call the Echo by lots of wrong names and keep asking it "what?" over and over when it answers back. In the skit, the Echo is always right and the elderly people are wrong, though they think they are right. What this breach suggests, however, is that the people are right: Alexa may be hard of hearing, confused, and yet convinced it is right.
In software, earnestly trying to get it right while getting it horribly wrong is less critical when the stakes are low. If someone runs a search query and gets unhelpful or useless results back, it is merely inconvenient. The stakes rise when what a command does, and what it has access to, can cause physical, emotional, or, in this case, social damage.
Relationships are how we maintain connections for cooperative outcomes, and as a species we require those cooperative outcomes to survive. As such, who we know and stay on good terms with is incredibly important to us. We must be social.
As this breach of privacy shows, Alexa, even if its programmers never intended it, can easily become the selectively social "Mean Girl" in the middle, sending messages whose transmission, reception, and interpretation could create gossip, rumor, or unintended exposure for its hosts. This is bad.
When this recent Amazon news hit, IBM CEO Ginni Rometty was speaking at the Viva Technology conference about privacy concerns and artificial intelligence. Her line, "We have to have trust in technology," could not have been more poorly timed.
Do we have to have trust in technology?
It seems that technology companies require our trust in order to function, develop their technologies, and profit. Yet they do not seem to be investing in the kind of testing necessary to prove a product's robustness and safety before deploying it.
Should we be able to push back against forfeiting our privacy, our sanctuaries, and our social connections as collateral for serving as test cases while these companies work out the kinks?
Maybe we should ask Alexa to send a message…