Op-Ed: The Fight to End Surveillance Technology

Published in UNA-NCA Snapshots
Aug 17, 2021

By Marissa Zupancic, UNA-NCA Advocacy Fellow

The field of digital surveillance continues to grow as new technologies like biometric collection and facial recognition software become more readily available to law enforcement and government agencies. In a previous article, I detailed surveillance technology's impact on immigrants, Biden's proposed digital border wall, and the ways these technologies are biased against Black and Indigenous People of Color (BIPOC). Now, Senator Edward J. Markey (D-Mass.) and multiple members of Congress support legislation that would ban the government from using facial recognition and other biometric collection technologies. The Facial Recognition and Biometric Technology Moratorium Act seeks to curtail the unregulated use of facial recognition and other biometric collection technology by governments in the United States, especially given this technology's poor accuracy when examining Asian and African American faces. For example, three Black men have been wrongfully arrested due to false positive matches from facial recognition software. This surveillance issue also stretches well beyond the United States: in 2020, Greece began to experiment with facial recognition and other surveillance measures to track and identify refugees. As this field of technology continues to grow, it has the potential to perpetuate inequities against immigrants, BIPOC, and other marginalized populations. It is essential to strike a balance between security and equity, and listening to affected communities is the best way to identify pathways for reform.

Community organizations like Data for Black Lives (D4BL) work within their communities to abolish ICE and to prevent local, state, and federal governments from using surveillance measures like facial recognition scanning and fingerprint collection. D4BL specifically works on democratizing data, for example by making information about how COVID-19 disproportionately affects different races and ethnicities more available. Beyond access to data, computer algorithms, like those behind facial recognition, pose another problem: they "learn" the biases present in society. These biases, even if unintentional, lead to data-driven harm. Amazon piloted an algorithm to sort through resumes, but the data initially fed to the algorithm consisted mostly of men's resumes. The algorithm then learned to discriminate against women and filtered out their resumes: it penalized resumes containing the word "women's" (as in "women's chess club") and downgraded applicants who attended all-women's colleges. For this reason, Amazon chose not to use the technology as part of its hiring process. This is why it is important and necessary to have BIPOC and other diverse voices leading the design of algorithms and interpreting data from different, non-White perspectives.
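To make the mechanism concrete, here is a minimal sketch of how a screening model can absorb bias from skewed training data. The resumes, labels, and hiring outcomes below are invented for illustration, and the model is a generic open-source classifier (scikit-learn), not Amazon's actual system:

```python
# A hypothetical sketch of how a resume-screening model "learns" bias
# from a skewed historical record. All data here is invented; this is
# not Amazon's system, just a generic scikit-learn classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past hires were overwhelmingly men, so resumes
# mentioning women's organizations were mostly labeled "not hired" (0).
resumes = [
    "captain of chess club, software engineer",
    "software engineer, hackathon winner",
    "captain of women's chess club, software engineer",
    "women's college graduate, software engineer",
    "software engineer, open source contributor",
    "women's coding society president, software engineer",
]
hired = [1, 1, 0, 0, 1, 0]  # labels mirror a biased hiring history

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token "women" even though
# gender is irrelevant to the job: the bias in the data is now encoded.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```

Because the word "women" appears only in resumes that the biased historical record marked as rejections, the model learns a negative weight for it and will penalize any future resume that mentions a women's organization.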

One area of concern is that people may not consider data "in the proper historical context" and thus fail to factor in systems like structural racism. For example, one mainstream assumption early in the COVID-19 pandemic claimed that hospitalizations of Black people were higher than those of White people due to "higher rates of chronic pre-existing conditions." This interpretation lacked the historical context necessary to understand how structural racism shapes unequal access to health care: people of color are less likely to have health insurance than their White counterparts. Organizations like D4BL work at the community-organizing level to usher in this needed change, while federal legislation targets the dangers of surveillance tech nationally.

The Facial Recognition and Biometric Technology Moratorium Act aims to address the systemic racism that new surveillance technologies like facial recognition and biometric collection perpetuate. The legislation responds to evidence that government entities deploy "unregulated facial recognition technologies"; about 50% of adults living in the United States already have their faces in these surveillance databases. The bill also bans the use of related surveillance technologies, like voice recognition. Congresswoman Pramila Jayapal (WA-07), a proponent of this legislation, emphasized the disproportionate effects surveillance technology has on the Black community and how the bill prioritizes racial justice. She explained that "facial recognition technology is not only invasive, inaccurate, and unregulated but it has also been unapologetically weaponized by law enforcement against Black people across this country." Certain facial recognition programs have correctly identified Black women's faces only 35% of the time, compared with roughly 80% accuracy for White men. By restricting federal authorities' use of facial recognition and biometric collection technologies, the legislation seeks to end their disproportionate impact on BIPOC.

Cities and states across the United States, including Virginia, are beginning to ban police departments from using facial recognition. Because facial recognition technology lacks federal regulation in the United States, individual tech companies have no obligation to prevent police forces from using their programs. Privacy advocates cheered when Amazon announced that it would not allow police to use Rekognition, the company's own facial recognition system. However, Amazon could change that policy at any time, precisely because there are no federal regulations on this biased technology. While the United States is beginning to recognize the harm of surveillance tech through conversations at the local and federal levels, other countries continue to experiment with and fund new and invasive technologies.

Over the course of the COVID-19 pandemic, Greece has used various technologies to surveil its refugee populations. Petra Molnar, the Associate Director of the Refugee Law Lab at York University, traveled to the Moria camp on Lesbos, Greece, last year. Described by some as a 'living hell,' the Moria refugee camp housed over 20,000 refugees before it burned down in September 2020. Clinics operated at full capacity, were reserved exclusively for emergencies, and were only accessible to those with police-issued identity papers (ausweis). Over the past year, Greece has been testing different aerial surveillance measures, like drones, on refugees in camps. People must sign in and out when going to purchase items from the store, prompting questions about how the government uses, or plans to use, this data. Wi-Fi connectivity in Greece's refugee camps is also limited, and what little connection exists is provided by Greek authorities. People living in these camps want to use the Wi-Fi to contact their loved ones, but simultaneously fear what data the Greek government collects from them.

Countries across Europe continue to develop and test new surveillance technologies on refugees. One type of surveillance deployed across the pond is AI lie detection: even though there is no standardized "tell" that reveals when a person is lying, this technology scans people's faces to judge whether they are lying, frequently producing false positives. The lie detection project received funding across the United Kingdom, Greece, and other European countries from 2016 to 2019. However, the project is now the subject of a lawsuit because the company behind it failed to release documents about its algorithms in response to freedom of information requests, so it is unclear when countries in Europe will have access to this technology. Overall, growing surveillance tech has far-reaching implications for the privacy of people in the United States and of migrant populations abroad.
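To see why false positives are such a problem, consider the rough arithmetic below. Every number in it is a hypothetical assumption chosen for illustration, not a measured figure from any actual system. Even a detector that sounds accurate on paper will flag far more truthful travelers than actual liars when lying is rare:

```python
# Hypothetical base-rate arithmetic for an AI lie detector at a border.
# All rates below are assumptions for illustration, not measured figures.
travelers = 10_000
lie_rate = 0.01             # assume only 1% of travelers are lying
sensitivity = 0.80          # assume the detector catches 80% of real lies
false_positive_rate = 0.10  # assume it wrongly flags 10% of truthful people

liars = travelers * lie_rate                   # 100 liars
truthful = travelers - liars                   # 9,900 truthful travelers
true_alarms = liars * sensitivity              # 80 lies correctly flagged
false_alarms = truthful * false_positive_rate  # 990 truthful people flagged

share_wrong = false_alarms / (true_alarms + false_alarms)
print(f"Flagged: {true_alarms + false_alarms:.0f}, "
      f"of which {share_wrong:.0%} are false positives")
```

Under these assumed rates, more than nine out of ten people the system flags would in fact be telling the truth, which is exactly the dynamic that makes unvalidated lie detection so dangerous for refugees.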

The fight to regulate and ban biased surveillance technology will not end anytime soon. While community organizers continue to gather and call for an end to facial recognition that discriminates against faces that are not White, the conversation at the federal level is only beginning. The U.S. government expanded its surveillance reach in 2020 with drone surveillance by Customs and Border Protection agents, and facial recognition companies continue to pull photos from social media and add people's faces to their databases without those people's knowledge. Companies like Clearview AI continue to offer free trials of facial recognition programs to law enforcement officers, frequently without the officers' own departments' knowledge. Without some form of standardized federal regulation, there is endless potential for the abuse of this surveillance technology. In order to fight invasive surveillance tech, it is necessary to educate others, advocate for stronger regulations, and halt the federal use of all surveillance technologies. Centering the voices of BIPOC, those most affected by data misinterpretation and facial recognition, is also essential in shaping how this surveillance technology is regulated. It is important to educate local, state, and federal lawmakers on the urgency of this issue in order to spark widespread change and end the use of this discriminatory tech.
