US Institutions Must Divest from Facial Recognition Research & Development
Lieutenant Commander Carl Governale, US Navy;
Jevan Hutson, University of Washington School of Law;
P. M. Krafft, PhD, Oxford Internet Institute;
Zulkayda Mamat, Massachusetts Institute of Technology
Authors’ Note: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any agency of the U.S. government.
On October 7, 2019, the US Department of Commerce blacklisted 28 Chinese tech giants and government entities for their role in the “brutal suppression of ethnic minorities within China”. Among the organizations banned were industry-leading artificial intelligence and facial recognition companies. Facial recognition software, recently described as the “plutonium of artificial intelligence (AI)”, is a technology for the automated categorization of people based on phenotype, or, in the case of Xinjiang, on individuals’ unique face prints, enabling police and governments to trawl through collections of images to find and track targeted people. When combined with massive data infrastructures such as footage from surveillance cameras, photos posted on social media, or headshots from government-issued photo IDs, facial recognition threatens personal liberty and facilitates new forms of authoritarian control and oppression.
The media has often covered these emergent technologies and their integration into Chinese authoritarian surveillance systems: the Chinese social credit systems seemingly taken straight out of an episode of Black Mirror; protesters tearing down facial recognition cameras in Hong Kong; the technology-enabled oppression of the Uyghur people in China echoing memories of abominable concentration camp conditions.
What’s left out of these stories is the complicity, if not the active role, of US researchers and organizations in these cases. The tools of digital authoritarianism that enable China’s surveillance state are just as often Made-in-the-USA. Framing China as the dystopian center of authoritarian surveillance neglects the instrumental roles that the U.S. government, private industry, and academic research institutions have played in the development and deployment of modern biometric surveillance systems at home and abroad.
Facial recognition has roots in computer science research laboratories in the United States. The Intelligence Advanced Research Projects Activity (IARPA) and the US Department of Commerce’s National Institute of Standards and Technology (NIST) build datasets, hold competitions, and fund research programs, complete with benchmark datasets of faces. US companies are also major players in this landscape. Amazon’s Rekognition tool is industry-standard. Beyond their own commercial product efforts, Intel Corporation, Google, and NVIDIA all funded the OpenFace project, one of the most prominent publicly accessible pieces of facial recognition software.
Alongside industry, universities also play key roles. Most top US computer science departments invest in computer vision, of which facial recognition is a key component. One illustrative example centers on the Massachusetts Institute of Technology (MIT). MIT is a leading research institution renowned for cutting-edge contributions to AI. It has also historically been one of the biggest recipients of US military funding. Unfortunately, the US government isn’t the only entity looking to fund and benefit from MIT’s AI surveillance research. Chinese companies that provide the government with “smart policing” capabilities in Xinjiang are also funding MIT’s AI research.
In 2018, MIT entered a research partnership with SenseTime, the Chinese AI giant and global facial-recognition leader, one of the companies recently blacklisted by the US Department of Commerce for its role in the Uyghur ethnocide. The partnership made SenseTime a major funder of the MIT Quest for Artificial Intelligence. Prior to the blacklisting, SenseTime had already come under criticism earlier this year for its joint venture in Xinjiang with Leon Technology, a company that, one year into China’s Strike Hard Campaign Against Violent Terrorism, was deriving 42 percent of its revenue from “security” business in Xinjiang targeting millions of ethnic minorities. The SenseTime funds at MIT are joined by significant donations from iFlyTek, another Chinese surveillance technology company blacklisted by the US Department of Commerce, and one about which Human Rights Watch issued a 2017 expression of concern for its mass data collection in police surveillance in Xinjiang.
To make matters worse from the perspective of US policy, funds from these companies implicated in the Uyghur ethnocide have, on several specific projects, been pooled with funds from US military agencies such as the Army Research Office and the Office of Naval Research. Following the October blacklisting, MIT was reviewing its relationship with SenseTime, but no subsequent actions have since been reported.
MIT is far from the only US institution with ties to the Uyghur ethnocide and with funding pooled alongside US government agencies. Another example, from the highly ranked computer science department at the University of Illinois at Urbana-Champaign, reveals pooled funding between a major US intelligence funder and CloudWalk, a Chinese tech startup that is working to automate the detection of Uyghurs and Tibetans based on phenotypes.
These examples illustrate that US institutions have been complicit in the development and export of powerful surveillance tools used to oppress the Uyghur people, which should garner our collective horror and outrage. But this outrage is hollow if we are not simultaneously interrogating the deployment of the same technologies within the US. Many of the same institutions are culpable for the organized, systematic oppression that facial recognition and other information technologies enable inside and upon our own borders.
The Department of Homeland Security (DHS), Immigration and Customs Enforcement (ICE), Palantir, and Amazon have partnered to develop technology to supercharge mass deportation. Our own government agencies involved in surveilling, arresting, and deporting immigrants aggregate personal information en masse, such as license plate records, and collect biometric information such as fingerprints, iris scans, and face prints for the purpose of identifying, cataloging, and surveilling immigrants at unprecedented speed and scale.
These systems use automation to accelerate the speed and scale of racialized police work, which has the effect of amplifying the power of those who benefit from such systems while disempowering marginalized groups. Oversteps and abuses by government agencies around the country — from domestic surveillance documented by Snowden to the militarization of local police forces — have led civil rights advocates and policymakers to pass strict local surveillance regulations in Washington, California, Massachusetts, and Tennessee, as well as national measures, recommendations, and multiple campaigns.
If the blacklisted Chinese companies deploying facial recognition were based in the US, they would be considered major innovators, even as they surveilled immigrants, Black activists, and other activists of color. Amazon CEO Jeff Bezos recently acknowledged facial recognition as “a classic dual-use technology”. If facial recognition were actually being regulated as a dual-use technology, i.e. one with military proliferation applications, then sirens would be ringing and careers ending over the way US military funding has been pooled with funding from these blacklisted Chinese companies.
Department of Commerce regulations of the technology itself are not enough, though. We must halt research, development, procurement, and deployment of facial recognition technologies because of their abhorrent uses and their disproportionate abuse by state actors for surveillance and subjugation. We must cultivate an environment where it is socially unpalatable to intellectually contribute to facial recognition projects and politically unpalatable for the US government to dedicate limited fiscal resources to these troubling technologies. US research funding agencies must divest from facial recognition research, and facial recognition technologies — even those developed by US companies — must be banned.