Judith Donath
Jul 23, 2018

Face recognition is a powerful technology that, depending on your point of view, promises or threatens to eradicate our ability to be anonymous in public. Proponents anticipate that it will unmask criminals, end human trafficking and make our world a far safer place. Critics fear that it will enable oppressive government surveillance, turn our everyday activities into fodder for marketers, and chill free speech and expression.

The potential harms from face recognition technologies are dire enough that a panel of privacy and civil rights advocates has called for it to be banned for use in law enforcement. Law professor Woodrow Hartzog and ethics scholar Evan Selinger go further, describing it as “toxic” and calling for an outright ban on its use. While I believe that their fears about ubiquitous face recognition are well-founded, I have concerns about their proposed solution: not only is a ban on face recognition technology very unlikely, but focusing on the specific technology is not the right approach.

The tremendous advantages face recognition provides to governments and businesses make a ban on it unlikely. Tracking people as they go about their daily lives allows marketers to learn their habits, hobbies, food preferences, and shopping patterns; it gives law enforcement a powerful tool for finding criminals; it provides governments with unprecedented power to monitor and control people. These powerful institutions, their lobbyists, and law-makers have much to lose if face recognition is banned. At the same time, there is likely to be little grassroots demand for a ban: for individuals, face recognition provides a number of immediate conveniences (the ease of shopping when you simply walk through a store and take what you like, no registers necessary) and an enhanced feeling of safety (who will want to go back to the wild and dangerous days, when unmarked criminals walked the streets, when no watchful eye escorted you safely home?); its dangers, though in fact significant, seem abstract and distant. We have seen repeatedly that people are willing to sacrifice privacy when tempted by consumer benefits or the promise of protection, especially when induced to feel fearful and threatened; they brush aside privacy concerns with “why should I care if I’m doing nothing wrong?” Between the tremendous flow of valuable information to commercial and governmental agencies and the relatively low demand for privacy protection by individuals, I think a ban on the use of face recognition technology — important as it may well be to maintaining individual freedom and privacy — is unlikely.

That said, finding ways to protect freedom and privacy in the face of rapidly advancing surveillance and analysis technologies is important and urgent. But calling for a ban on face recognition technology is not the right approach — instead, we need to seek regulation based on the qualities and capabilities of the technology, not the technology itself. Ideally this approach would avoid an arms race of regulation-dodging technologies, and instead spur the development of innovations that comply with the permitted scope of more limited surveillance.

Focusing on the specific technology distracts from the larger questions: what activities are we trying to ban and what rights and situations are we trying to protect? There are many ways for machines to identify people, from an identifying ping from one’s phone to visual gait analysis or other biometric markers (if dogs invented surveillance technologies, we’d have scent-based identification). Face recognition raises alarm because it is involuntary and immutable — but so are other biometrics. A privacy-centric taxonomy of identification approaches would be a useful starting point in exploring what we want to regulate.

Some initial categories for such a taxonomy include:

  • Is it part of the body (face, gait, pheromones) or an external object (id card, license plate)?
  • Is it permanent (face, fingerprints) or changeable (a business card; a swappable implant)?
  • Can it be turned on and off? It may be easy to turn off a broadcasting beacon — or nearly impossible. Faces can be obscured, but at significant social cost.
  • Can it be sensed secretly? You know if you are being fingerprinted or showing someone your traditional passport. You are unaware of being identified by face recognition via hidden camera or through an embedded IR beacon.
  • Does it carry other identifying information? If so, what? Faces also encode gender, age, and race. DNA links you to the rest of your family.
  • How accurate is it?
  • Does it sense something that people cannot? Faces are visible to anyone, but few humans can track by smell, and millimeter-wave scanners see through clothes.

This list is far from exhaustive — but it is a start (a rough sketch of how these dimensions might be encoded appears below). The key point is that regulation should be guided by these categories.
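To make the taxonomy concrete, here is a minimal sketch, in Python, of how these dimensions might be recorded as a structured profile of an identification technology. The class and field names are invented for illustration; this is one possible encoding under the categories above, not a proposed standard.

```python
from dataclasses import dataclass
from enum import Enum

class Accuracy(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass(frozen=True)
class IdentifierProfile:
    """Privacy-relevant properties of one identification technology."""
    name: str
    part_of_body: bool        # face, gait, pheromones vs. an ID card or license plate
    permanent: bool           # face, fingerprints vs. a business card or swappable implant
    can_turn_off: bool        # a broadcasting beacon can be switched off; a face, practically, cannot
    covertly_sensable: bool   # hidden camera vs. visibly handing over a passport
    encodes_other_info: bool  # faces also reveal gender, age, race; DNA links to family
    superhuman_sensing: bool  # senses what unaided people cannot (smell, millimeter wave)
    accuracy: Accuracy

# Two entries filled in according to the categories above (values are illustrative):
face_recognition = IdentifierProfile(
    name="face recognition", part_of_body=True, permanent=True,
    can_turn_off=False, covertly_sensable=True, encodes_other_info=True,
    superhuman_sensing=False, accuracy=Accuracy.HIGH)

store_beacon = IdentifierProfile(
    name="opt-in store beacon", part_of_body=False, permanent=False,
    can_turn_off=True, covertly_sensable=False, encodes_other_info=False,
    superhuman_sensing=False, accuracy=Accuracy.HIGH)
```

Filling in such a profile for each new identification technique would let regulators compare it against rules written in terms of these categories, rather than naming technologies one by one.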

Regulations based on such a taxonomy could be used to encourage the development of privacy-enhancing identification technologies. For example, stores like to identify customers for a variety of reasons: to treat high-spending ones better; to track movement for market research; to catch shoplifters. If stores were barred from using permanent, hard-to-obscure features for marketing purposes, they would need to find technologies other than face recognition for doing this, e.g. some kind of beacon you carry around the store with you, whether an object or a temporary phone app. They could still be allowed to use permanent, hard-to-obscure features for security, with the caveat that the recordings be discarded after a certain time period — and not be used for any other purpose. By using these categories for regulation, we avoid the sort of whack-a-mole response in which face recognition is banned, only to be replaced with gait recognition or some other equally problematic technique.
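As an illustration of how such a category-based rule might work in practice, here is a hedged sketch in Python of the store example: a hypothetical rule that bars permanent, hard-to-obscure identifiers from marketing use while allowing them for security, with a retention caveat. The rule logic, names, and purpose labels are invented for this example; it condenses the earlier profile down to the two fields this particular rule needs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tech:
    name: str
    permanent: bool        # cannot be changed (face, fingerprint)
    hard_to_obscure: bool  # cannot practically be hidden or turned off

def permitted(tech: Tech, purpose: str) -> tuple[bool, str]:
    """Evaluate a use of an identification technology against the category-based rule."""
    if tech.permanent and tech.hard_to_obscure:
        if purpose == "marketing":
            return False, "permanent, hard-to-obscure identifiers barred from marketing"
        if purpose == "security":
            return True, "allowed for security only; recordings must expire, no other use"
    return True, "no category-based restriction applies"

face = Tech("face recognition", permanent=True, hard_to_obscure=True)
beacon = Tech("opt-in beacon app", permanent=False, hard_to_obscure=False)

print(permitted(face, "marketing"))    # barred
print(permitted(face, "security"))     # allowed, with retention caveat
print(permitted(beacon, "marketing"))  # allowed
```

The point is not this particular code, but the design choice it embodies: because the rule tests properties rather than naming face recognition, it would apply unchanged to gait recognition or any future technique with the same characteristics.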

Do we have a right to be anonymous in public? This question — which is fundamental to any discussion about the ethics and legality of face recognition — is both controversial and pressing, as surveillance technology becomes more powerful and security fears, both real and exaggerated, make the choice to be anonymous increasingly suspect. In the United States there are a variety of anti-mask laws (many enacted decades ago in response to the KKK’s masked terrorizing) as well as more recent adoption of extensive security surveillance; still, there is a significant history of connecting anonymity with free speech, and the right to be anonymous in public has been upheld in a number of cases. Elsewhere, anonymity is more endangered. Several European countries and Canada have recently passed laws banning burqas; these laws are aimed against Islamic practice (a highly problematic intention) but are often framed so as to ban all face-covering in public (with an occasional practical exception for winter scarves). Meanwhile, the power of automated identification technology to intimidate is vividly displayed in China, where a combination of face recognition, extensive digital dossiers, and public shaming compels strict obedience to all laws and norms. Everywhere, increasingly ubiquitous cameras — governmental, commercial, and even personal — mean that if public anonymity is to survive, it will need protection. Existing laws need to be revisited, given the capabilities of new sensing and recording devices and of algorithmic analysis. They also need to be recast as permitting (or forbidding) the masking of identity in general, not just the covering of faces.

Invisibility is a key factor in face recognition’s perniciousness — indeed, in that of any surveillance system. We need to think about whether and how to make clear to people how public or private the space they are in actually is. We navigate traditional spaces with an intuitive sense of what is public or private, and adjust our expectations, behavior, clothing, facial expression, etc. accordingly. We are attuned to different audiences, too, whether they are acquaintances or strangers, people we wish to impress or those we are oblivious to. One of the problems with surveillance technologies is that they are mostly covert: under surveillance, we are unaware of the watching audience; even if we think about being watched, we have only a vague sense of who or what is doing the watching — a distant person? A machine algorithm? A yet-to-be-specified future analyst?

Visual representations that make clear to the people in a space that they are being watched, in what way¹, and by whom give them the ability to choose their behavior based on a more accurate understanding of how public their environment actually is. (I discuss this in greater detail in The Social Machine.) Armed with this information, people could choose whether they wanted to be in that space and, if so, adjust their behavior and appearance to the watching audience, whether human or machine, present or future, simple or algorithmic.

Private versus public is not a case of right versus wrong. There are uses for unwatched spaces and for heavily surveilled ones. One public square might be specifically designed to be an anonymous space where no one is tracked, while in another full tracking is permitted. People might ask to have the cameras on when the street lights go on at night but to have their streets be untracked in the daytime. Indeed, surveillance and privacy should be elements in zoning: certain public spaces could be zoned as surveillance-free, security-only-surveillance, opt-in, or non-biometric surveillance zones.

This call to make surveillance visible comes with one big caveat. The chilling effect is one of the harms of surveillance — would making people more aware of surveillance simply exacerbate that harm? Or would it galvanize people to demand zones of privacy and anonymity? Surveillance is beneficial when it deters crime, but not when it chills speech and expression. This is another reason for separating, for example, the technologies that might be allowed specifically for preventing and solving crime from those used for marketing, analysis, and so on. Today’s situation — in which we are covertly observed — saves us from the self-consciousness of the observed, but it is an illusory freedom, based on our unknowingly giving up troves of data about ourselves.

[1] Such depictions would, when possible, let people know not just what data was being recorded, but what information it was being tied to. Ten years ago, security cameras were common — but they were not technologies of individual identification. One approach would be to give people privately (e.g. via their phone) a depiction of what the space knows about them: this would avoid the privacy problems that publicly displaying extensive information about everyone in the space would create.
