Robots like Amazon’s Echo are becoming increasingly popular. But they’re relatively new guests in our homes, and we’re not always sure how they’ll behave. There’s been lots of anxiety about Echo working like a covert spy that listens to everything we say. And yet the device only starts recording after users wake it with a trigger word, Amazon doesn’t share customer-identifiable information with third parties, and users can permanently delete what Echo records. People even thought Alexa — the cloud-based voice service Echo uses — could call the police to report domestic abuse, even though that wasn’t possible.
What causes the panic? Well, devices like Echo are vulnerable to hacks and reidentification and have unresolved First Amendment issues, and their capabilities are subject to change. Indeed, nobody can guarantee that future Amazon products won’t “record all the time.”
There is, however, a fundamental reason why we find products like Echo troubling: The robots are wholly other. They listen like machines, not like human beings. They remember information like machines, not like human beings. And they share information like machines, not like human beings. We can’t accurately size up an Echo by looking at it and talking to it like we can with plenty of people.
In a forward-looking law review article by Margot Kaminski, Matthew Rueben, William Smart, and Cindy Grimm, the authors identify a new problem lurking on the horizon: robot fake-outs that are designed to exploit our deeply ingrained reactions to human body language.
As a hypothetical case of “dishonest anthropomorphism,” the scholars invite us to imagine a robot performing misdirection by looking downward while, at the same time, scrutinizing a nearby person with a camera installed in its mechanical neck. On the surface, the robot’s downcast eyes present reassuring visual cues — the robot can’t see everything being done, and so some privacy is protected — but the gesture instills a false sense of confidence, thanks to the sensors and processing hidden from view.
Fake-out bot, as I like to call it, has some similarities to Echo, but it also differs profoundly. Like Echo, fake-out bot absorbs, crunches, and releases information like a computer, not a person. But unlike Echo, fake-out bot has a human-like appearance. So, while the current second-generation version of Echo can evoke feelings of humanness in us even though the machine looks like a speaker, fake-out bot ups the ante.
Humans have evolved, biologically and socially, to associate the gaze with one set of body parts alone: Eyes can be prying — but not necks, cheeks, eyebrows, elbows, fingers, toes, or chins. Downcast eyes convey a sense that the full picture isn’t seen and can’t be viewed — that only bits and pieces are absorbed.
Fake-out bot isn’t dangerous simply because it differs from less humanistic internet-of-things devices. The threats it poses are also different from familiar instances of online duplicity.