Amazon Needs to Stop Providing Facial Recognition Tech for the Government

The benefits do not outweigh its privacy risks and dangers

Evan Selinger
8 min read · Jun 21, 2018


Photo by Randy Colas on Unsplash

Co-authored with Woodrow Hartzog

Imagine a technology that is potently, uniquely dangerous — something so inherently toxic that it deserves to be completely rejected, banned, and stigmatized. Something so pernicious that regulation cannot adequately protect citizens from its effects.

That technology is already here. It is facial recognition technology, and its dangers are so great that it must be rejected entirely.

Society isn’t used to viewing facial recognition technology this way. Instead, we’ve been led to believe that advances in facial recognition technology will improve everything from law enforcement to the economy, education, cybersecurity, health care, and our personal lives. Unfortunately, we’ve been led astray.

Procedural Pessimism

After an outcry from employees and advocates, Google recently announced it will not renew a controversial project with the Pentagon called Project Maven. It also released a set of principles that will govern how it develops artificial intelligence. Some principles focus on widely shared ideals, like avoiding bias and incorporating privacy-by-design principles. Others are more dramatic, such as staying away from A.I. that can be weaponized and steering clear of surveillance technologies that are out of sync with internationally shared norms.

Admittedly, Google’s principles are vague. How the rules get applied will determine if they’re window dressing or the real deal. But if we take Google’s commitment at face value, it’s an important gesture. The company could have said that the proper way to get the government to use drones responsibly is to ensure that the right laws cover controversial situations like targeted drone strikes. After all, there’s nothing illegal about tech companies working on drone technology for the government.

Indeed, companies and policymakers often seek refuge in legal compliance procedures, embracing comforting half-measures like restrictions on certain uses of technology, requirements for consent before deploying technologies in certain contexts, and…
