Robots like Amazon’s Echo are becoming increasingly popular. But they’re relatively new guests in our homes, and we’re not always sure how they’ll behave. There’s been lots of anxiety about Echo working like a covert spy that listens to everything we say. And yet the device only starts recording after users wake it with a trigger word, Amazon doesn’t share customer-identifiable information with third parties, and users can permanently delete what Echo records. People even thought Alexa — the cloud-based voice service Echo uses — could call the police to report domestic abuse, even though that wasn’t possible.

What causes the panic? Well, devices like Echo are vulnerable to hacks and reidentification and have unresolved First Amendment issues, and their capabilities are subject to change. Indeed, nobody can guarantee that future Amazon products won’t “record all the time.”

There is, however, a fundamental reason why we find products like Echo troubling: The robots are wholly other. They listen like machines, not like human beings. They remember information like machines, not like human beings. And they share information like machines, not like human beings. We can’t accurately size up an Echo by looking at it and talking to it like we can with plenty of people.

In a forward-looking law review article, Margot Kaminski, Matthew Rueben, William Smart, and Cindy Grimm identify a new problem lurking on the horizon: robot fake-outs designed to exploit our deeply ingrained reactions to human body language.

As a hypothetical case of “dishonest anthropomorphism,” the scholars invite us to imagine a robot performing misdirection by looking downward while, at the same time, scrutinizing a nearby person with a camera installed in its mechanical neck. On the surface, the robot’s downcast eyes present reassuring visual cues: the robot can’t see everything being done, and so some privacy is protected. But the gesture instills a false sense of confidence, thanks to the sensors and processing hidden from view.

Fake-out bot, as I like to call it, has some similarities to Echo, but it also differs profoundly. Like Echo, fake-out bot absorbs, crunches, and releases information like a computer, not a person. But unlike Echo, fake-out bot has a human-like appearance. So, while the current second-generation version of Echo can evoke feelings of humanness in us even though the machine looks like a speaker, fake-out bot ups the ante.

Humans have evolved, biologically and socially, to associate the gaze with only certain body parts: Eyes can be prying, but not necks, cheeks, eyebrows, elbows, fingers, toes, or chins. Downcast eyes convey a sense that the full picture isn’t being seen and can’t be viewed, that only bits and pieces are absorbed.

Fake-out bot isn’t dangerous simply because it differs from less humanistic internet-of-things devices. The threats it poses are also different from familiar instances of online duplicity.


Humans Are Flawed and Easy to Trick

Lots of online deceit involves people pretending to be other people, not machines pretending to be people. To be sure, the threat levels of “well-publicized” cases of deception can give us a misleading sense of how dangerous, generally speaking, it is to connect with other people online.

But problems like catfishing exist and persist. If catfishers disguise their identities well enough on social media, texts, email, and the like, they can seduce. Using fake photos of desirable people and insincere romantic words, catfishers trick victims into trusting them. We’ve all heard plenty of examples, like retired NBA star Ray Allen’s claim that he was victimized by a man pretending to be “a number of attractive women.”

Phishing is a comparable activity. Some phishing involves luring folks into giving out credit card numbers, passwords, and Social Security numbers. This swindle occurs when thieves masquerade as members of reputable organizations, companies, or institutions. By pretending to be a concerned representative of JPMorgan Chase, Wells Fargo, Bank of America, or a similar company, phishers can provide email recipients with compelling reasons to unhesitatingly share financial details that they should be keeping under lock and key. Once the 2018 Winter Olympics started, phishing scams adapted and contrived scenarios to take advantage of the games.

Another style of phishing involves thieves posing as vulnerable people in tough binds who need immediate financial assistance but can offer absurdly generous rewards down the road. One of the most popular versions of this confidence racket is the Nigerian scam, where fictional members of fabricated wealthy families falsely promise riches in exchange for a dupe paying minor legal fees to release a fortune.

Although the template for this flimflam dates back more than 200 years to the Spanish Prisoner con, it’s still going strong. Recently, the underlying emotions it messes with, like greed, were exploited by people impersonating President Trump and Elon Musk on Twitter to run a bitcoin hustle.

When these swindles work—or related ones, like bots on social media disguised as people, with fake names, bios, identifying photos, and clever human-sounding scripts—it’s because victims take what they see and read at face value. They don’t critically interrogate how the information they’re presented with online is mediated and how it might differ from what they’d be confronted with in similar face-to-face situations.

In person, it’s much harder to disguise your appearance and voice. This is why so many people are surprised when they meet an online date for the first time and the person actually looks like their profile pic. To look younger, people upload old photos. To look more attractive, people play with filters, editing, and camera angles.

Fake-out bot is frightening because it can easily deceive our senses and trick us face-to-face (so to speak). Hollywood makeup artists and plastic surgeons can do wonders, but it would be laughable if the average white guy tried to look or sound like a Nigerian prince. Technologists, however, won’t have a hard time installing hidden cameras on fake-out bot that preserve the illusion that what you see is what you get.


Robots Can Have Superhuman Abilities

Furthermore, while some people have better vision, memories, and IQs than others, the ranges pale in comparison to the gaps that can separate robots from each other and us. Robots can be designed in so many different ways and with such varying capabilities that we can’t make educated guesses about what they can do just by looking at them.

Infrared vision? That’s possible. Superhuman lip-reading ability? Check. The only way seeing can justify believing is if we can trust that a robot is a well-known model with clearly defined specs that haven’t been tampered with.

The closest parallel to fake-out bot’s troubling gestures is the misleading avatar. As Judith Donath, an adviser at Harvard’s Berkman Klein Center, noted years ago when the virtual world Second Life was popular, online versions of body language can be misleading because they aren’t governed by the same restrictions that constrain socialized human bodies.

An avatar avoiding eye contact doesn’t have to be distracted or shy. An avatar holding direct eye contact doesn’t have to be telling the truth. And a smiling avatar doesn’t have to be friendly. Of course, humans can perform these tricks as well. Specially trained people, like FBI agents, are experts at effectively responding to verbal and physical cues during an interrogation.

Maybe that’s what lies ahead, unless we do a good job of regulating innovation: sneaky, master-manipulator bots that are unencumbered by recognizable tells.

As Kaminski and her co-authors argue, the existing privacy principles known as the Fair Information Practice Principles (FIPPs) provide the foundation for countering this type of deception. But they need to be updated for the robot era to create accountability for exploiting innate and deeply habituated human reactions.

Whether through industry design standards, regulatory direction, or consumer-driven demand, the fake-out bots among us must be identifiable.

We judge other humans every minute of the day. We must be able to judge bots, too.