Co-authored with Woodrow Hartzog
Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects.
That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.
Society isn’t used to viewing facial recognition technology this way. Instead, we’ve been led to believe that advances in facial recognition technology will improve everything from law enforcement to the economy, education, cybersecurity, health care, and our personal lives. Unfortunately, we’ve been led astray.
After an outcry from employees and advocates, Google recently announced it will not renew a controversial project with the Pentagon called Project Maven. It also released a set of principles that will govern how it develops artificial intelligence. Some principles focus on widely shared ideals, like avoiding bias and incorporating privacy by design principles. Others are more dramatic, such as staying away from A.I. that can be weaponized and steering clear of surveillance technologies that are out of sync with internationally shared norms.
Admittedly, Google’s principles are vague. How the rules get applied will determine if they’re window dressing or the real deal. But if we take Google’s commitment at face value, it’s an important gesture. The company could have said that the proper way to get the government to use drones responsibly is to ensure that the right laws cover controversial situations like targeted drone strikes. After all, there’s nothing illegal about tech companies working on drone technology for the government.
Indeed, companies and policymakers often seek refuge in legal compliance procedures, embracing comforting half-measures like restrictions on particular uses of technology, consent requirements for deploying it in certain contexts, and vague pinkie-swears in vendor contracts not to act illegally or harm others. For some problems raised by digital and surveillance technologies, this might be enough, and certainly it’s unwise to choke off the potential of technologies that might change our lives for the better. A litany of technologies, from the automobile to the database to the internet itself, has contributed immensely to human welfare. Such technologies are worth preserving with rules that mitigate harm but accept reasonable levels of risk.
Facial recognition systems are not among these technologies. They can’t exist with benefits flowing and harms adequately curbed. That’s because the most-touted benefits of facial recognition would require implementing oppressive, ubiquitous surveillance systems and the kind of loose and dangerous data practices that civil rights and privacy rules aim to prevent. Consent rules, procedural requirements, and boilerplate contracts are no match for that kind of formidable infrastructure and irresistible incentives for exploitation.
The weak procedural path proposed by industry and government will only ensure facial recognition’s ongoing creep into ever more aspects of everyday life. It will place an even greater burden on people to protect themselves, and it will demand accountability resources that companies don’t have. It’s not worth it, no matter what carrots are dangled. To stop the spread of this uniquely dangerous technology, we’ll need more than processing rules and consent requirements. In the long run, we’re going to need a complete ban.
Can Amazon Be an Agent of Social Change?
The American Civil Liberties Union (ACLU) and other coalition partners this week wrote to Jeff Bezos demanding that Amazon stop providing government agencies with facial recognition technology and services associated with Rekognition, the company’s facial recognition system. The system poses a major threat to civil liberties because it “can identify, track, and analyze people in real time and recognize up to 100 people in a single image,” scanning data against a set of tens of millions of faces.
The letter is succinct and contains a single action item:
We demand that Amazon stop powering a government surveillance infrastructure that poses a grave threat to customers and communities across the country. Amazon should not be in the business of providing surveillance systems like Rekognition to the government.
It’s important to note that the demand has a product-specific component: It’s a plea for Amazon to stop servicing the government with Rekognition. But it’s also a general request that applies equally to other endeavors. Amazon should get out of the government surveillance infrastructure business entirely, and never return.
Two claims are made to justify this demand. The first is that Amazon’s efforts have the potential to help the government monitor the most vulnerable populations:
Amazon…encourages the use of Rekognition to monitor “people of interest,” raising the possibility that those labeled suspicious by governments — such as undocumented immigrants or Black activists — will be targeted for Rekognition surveillance.
The second is that Amazon’s efforts have the potential to help the police destroy our freedom to be in public without the government monitoring all of our activities:
Amazon has even advertised Rekognition for use with officer body cameras, which would fully transform those devices into mobile surveillance cameras aimed at the public.
In sum, there is a general problem (no anonymity in public) with unevenly distributed consequences (more threatening to minority and other vulnerable groups). Facial recognition enables surveillance that is oppressive in its own right; it’s also the key to perpetuating other harms, civil rights violations, and dubious practices. These include rampant, nontransparent, targeted drone strikes; overreaching social credit systems that exercise power through blacklisting; and relentless enforcement of even the most trivial of laws, like jaywalking and failing to properly sort your garbage cans.
Amazon rejected the appeal. Matt Wood, Amazon’s general manager of artificial intelligence, compared facial recognition technology to the internet. He noted that there are both positive and negative uses of facial recognition technology, and he argued that the threat of bad actors doesn’t outweigh all the good that responsible use of facial recognition technology can yield, like “preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families.”
In short, Amazon’s response suggests that if problems ever do arise, appropriate policy correctives can and should be followed. From this perspective, Amazon is acting as if the law is up to the tasks ahead and the company has an ethical obligation to stay the course.
Amazon’s position overlaps with views espoused by some leading privacy researchers and advocates. The standard privacy debates don’t usually consider bans. They revolve mostly around the question of how to create the best policies for preventing facial recognition technology from being abused while making the most of its potential. Stakeholders quibble over things like when law enforcement will be allowed to create name-face databases, how long they can retain the information and when they must delete it, the level of protection for these databases, who law enforcement can share information with, citizen access and correction rights, and policies for eliminating bias in the system. For lots of general-purpose technologies, these debates are sound and wise. But facial recognition is different.
Facial Recognition Technology Creep
Facial recognition technology is not like a general-purpose computer. It’s a specific tool that enables tracking based on our most public-facing and innate biological feature. It’s an ideal tool for oppressive surveillance. It poses such a severe threat in the hands of law enforcement that the problem cannot be contained by imposing procedural safeguards around how faceprint databases and face recognition systems are constructed and used.
Laws adequately locking down facial recognition technology, especially in today’s political climate, seem unlikely. The framing of facial recognition as critical for stopping criminals and finding missing persons will likely result in rules that yield too many concessions and leave open too many loopholes. The best course of action for industry, then, would be to quit cold turkey. As Frank Pasquale has argued about certain unsalvageable surveillance and data technologies, “Sometimes the best move in a game is not to play.”
Our procedural pessimism is rooted in what we see as a defensible notion: facial recognition technology creep. Facial recognition creep is the idea that once the infrastructure for facial recognition technology grows to a certain point, with advances in machine learning and A.I. leading the way, its use will become so normalized that a new common sense will be formed. People will expect facial recognition technology to be the go-to tool for solving more and more problems, and they’ll have a hard time seeing alternatives as anything but old-fashioned and outdated. This is how “techno-social engineering creep” works — an idea that one of us discussed in detail in Re-Engineering Humanity with Brett Frischmann.
To appreciate why facial recognition technology creep is a legitimate way to identify slopes that are genuinely slippery, you have to be a realist and accept the fact that some, though not all, technological trajectories can be exceedingly difficult to change. This is especially so in the case of trajectories formed by infrastructure that grows significantly, where the growth is propelled by strong interest across sectors; heavy financial investments; heightened expectations from consumers, citizens, and politicians; increased social, personal, regulatory, and economic dependency; and limited legal speed bumps that stand in the way.
Unfortunately, facial recognition technology is a potent cocktail that’s made from all these ingredients. It’s the Long Island iced tea of technology.
Our face is one of our most important markers of identity, and losing control of that is perhaps the greatest threat to our obscurity. We often recognize others by their faces, even as people age. Faces are also the easiest biometric for law enforcement to obtain, because they can be unobtrusively, inexpensively, and instantly scanned and tend to be hard to hide without taking drastic or conspicuous steps.
While there’s talk of “fighting A.I. surveillance with scarves and face paint,” attempts to disguise your face are doomed to be temporary countermeasures that institutions with deep pockets will find ways of neutralizing. Furthermore, it’s unfair for people to bear the burden of protecting themselves from surveillance.
Being attuned to the profound weight of infrastructure surrounding us isn’t the same thing as being a technological determinist, one who believes technology completely dictates people’s actions. On a strong determinist view, technology has been and will continue to be used to increase human well-being by minimizing transaction costs wherever the costs can be cut, and there is little anyone can do to alter that course.
The main difference between what we see as being a realist about the power of infrastructure and being a technological determinist comes down to different takes on alternative pathways. It might sound like a contradiction in terms, but the realist can believe in the transformative potential of ideals, like civil rights, that should matter more than worshipping at the altar of efficiency. These ideals are damned hard to champion, but they’re not preordained to fail. Such ideals place moral progress ahead of the technological variety, and they take courage, not mere will, to create and preserve.
The Road Ahead
There is a movement afoot that rejects industry collaborations with governments when unreasonably dangerous technologies are involved. Nearly 70 civil rights organizations signed the letter the ACLU delivered to Amazon. Eventually, more than 150,000 people signed the petition as well. In addition to the Google employees opposing Project Maven, some Microsoft employees are calling for the company to stop selling to U.S. Immigration and Customs Enforcement.
Because of its size and power, Amazon is a leader on the world stage. Good leadership requires accepting the responsibility to set a good example, even in situations that involve pushing back against trends and received wisdom. Indeed, that’s exactly what courage can require. It has been several years since Google committed to not building a facial recognition product, and it has yet to build one.
By being courageous and voluntarily acting in an ethically conscientious way consistent with stakeholder values, Amazon can send a strong message about why facial recognition technology is dangerous and why companies shouldn’t be complicit with the government’s agenda of expanding its surveillance infrastructure. Contained courage isn’t enough, but it’s a start and it can spread. It can be the spark that ignites a much-needed larger agenda, one that we hope revolves around rejecting all facial recognition systems.