Back in 1995, when Nicholas Negroponte wrote Being Digital, his seminal book on technologically driven life, facial recognition was still something of a dream. But he felt it was destined to become reality.
“Your face is, in effect, your display device, and your computer should be able to read it, which requires the recognition of your face and its unique expressions,” he wrote. “The technical challenge of recognizing faces and facial expressions is formidable. Nevertheless, its realization is eminently achievable in some contexts. In applications that involve you and your computer, it only needs to know if it is you, as opposed to anybody else on the planet.”
In 2018, this reads as prophetic. Just last year, Apple debuted the iPhone X, which users can choose to unlock by looking at — and thus being recognized by — its camera. Your face is already a password.
Of course, by now we also know a lot about facial recognition’s other use cases. In recent years, facial recognition has been introduced at airports around the United States as a way to confirm travelers’ identities. It was used last month to identify a man who murdered five people at a newspaper office in Maryland, and this month to pinpoint two people suspected of poisoning Russian double agent Sergei Skripal and his daughter Yulia in the UK. Tech companies are working to create facial recognition software that can, among other things, help a blind person know who’s in a photograph, or even who’s in the room with them. Credit card companies are hoping facial recognition is the next step in payment authentication.
We’ve also seen how facial recognition technology can be misused.
“Imagine a government tracking you everywhere you walked over the past month without your permission or knowledge,” Brad Smith, president and chief legal officer of Microsoft, wrote recently in a public call for government regulation of facial recognition technology. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.” Or don’t imagine at all, and instead just look to China, where in some cities “cameras scan train stations for China’s most wanted,” as the New York Times reports, and “billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts.” Facial recognition is also being used to “track members of the Uighur Muslim minority and map their relations with friends and family.”
Obviously, the kinds of questions Microsoft’s Brad Smith is raising (What limits should be put on this technology? Who should control it?) are vital to coming to terms with how facial recognition will shape the world around us. But there’s a question Smith doesn’t address that Negroponte did, more than two decades ago: What does it mean for our computers to “know” it is us when they “see” us?
In other words, what does facial recognition actually do to our face?
Last summer, French artist Raphaël Fabre claimed to have successfully applied for a national ID card in France using an entirely computer-generated image of his face. As Vice reported, Fabre “created the portrait using programs and techniques utilized for special effects in movies and games, such as Blender and TurboSquid, which is a marketplace for 3D objects. He digitally sculpted a human head from what was essentially a cube before retouching the image in 2D.” On close inspection, the image Fabre created is obviously not a photograph — but it is startlingly close to one.
Fabre told Vice that his goal was to explore our relationship to the image, “the limits of the human eye, or its poetic interpretation, and the power of fiction and technology. We are so surrounded by modified, digitised image [sic] of bodies, and basically images of everything, that our world becomes a digital image in a way.” And, as he showed — but neglected to mention — it’s not just our world that has become a digital image. It’s our bodies, our selves.
What does a facial recognition camera “see” when it spots our face amongst all the others? How does it know who’s looking at it — or who it’s looking at? The answer is that it essentially does to each of us what Fabre did to himself. It considers our face, and by extension us, as pure data. Under the gaze of a facial recognition camera, we are not ourselves; we are digital renderings of ourselves.
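The reduction described above can be made concrete. A minimal sketch, in Python, of how a recognition system typically “sees” a face: not as a face at all, but as a fixed-length vector of numbers (an embedding) to be compared against stored vectors. The tiny four-dimensional embeddings and the match threshold below are invented for illustration; a real model would produce hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """How alike two embeddings are: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings standing in for what a model would extract.
stored_template = [0.12, -0.85, 0.40, 0.31]   # "you", as enrolled
camera_capture  = [0.11, -0.83, 0.42, 0.30]   # "you", seen today
someone_else    = [-0.70, 0.22, 0.05, -0.64]  # a stranger

THRESHOLD = 0.9  # above this, the system declares a match

print(cosine_similarity(stored_template, camera_capture) > THRESHOLD)  # True
print(cosine_similarity(stored_template, someone_else) > THRESHOLD)    # False
```

To the system, “recognizing you” is nothing more than the first comparison crossing a numeric threshold: the face has already become data before any decision is made about it.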
What does all this mean for our face?
In 2012, David Lyon, director of the Surveillance Studies Centre at Queen’s University, wrote that in a modern surveillance context, “the information that proxies for the person is made up of ‘personal data’ only in the sense that it originated with a person’s body and may affect their life chances and choices.” Our logins, browser history, Facebook friends, Twitter posts, and records of travel are details of our personal life — but they are not, even when pieced together, the whole story. And yet, as Lyon wrote, “the piecemeal data double tends to be trusted more than the person, who prefers to tell their own tale.” Like it or not, in the world we are building, you are the rendering; you are the data.
Faces have always been used as a tool of identification. Law enforcement agencies and governments have long used faces to systematically set some people apart from others. Yet even in those instances, simply having an image of a face was not the same as knowing everything about someone. This is what facial recognition changes: it lets anyone assume that looking at someone’s face means knowing them. Surveillance structures are currently being built that make this assumption by default. Your face was once assumed to hide your secrets. It will no longer be allowed to.
Still, what facial recognition actually recognizes — a data-based abstraction of our face — can only ever be a superficial portrayal. This is why we feel injustice and fear when we learn of China’s surveillance state, where the names of those guilty of petty offenses like jaywalking are displayed publicly: we don’t know the full story. We don’t know why they were jaywalking.
Ultimately, facial recognition robs us of detail. It renders our faces smooth and sterile, without depth or personality. The contours, and the mysteries they hide, are flattened. Our faces become detached from the nuances of the lives they represent. And because, unlike names, our faces are (for now) permanent features, we can never alter or hide them from anyone who may want to use them for good or ill. In short, our faces will morph from human tools — gateways to understanding and empathy — to the very things Negroponte envisioned: computational tools. In the world that facial recognition is leading us toward, your face will become strictly an interface. It will become a display.