Our Government Is Using an Algorithm to Unfairly Jail People

Anna
Art of the Argument
7 min read · Apr 14, 2022
Image from boldbusiness.com

In 2016, two people were arrested for petty theft: Brisha Borden, a Black woman, and Vernon Prater, a white man. Both were scored by an algorithm called the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, which assigns each defendant a risk score from one (least likely) to ten (most likely) predicting whether they will reoffend. Borden was rated an eight even though this was her first major offense; she previously had only misdemeanors on her record. Prater was rated a three despite a record that included several felonies, among them armed robbery (Angwin et al.).

Unsurprisingly, the scores got it backwards: Borden has not committed a crime since that offense, while Prater has committed several more (Angwin et al.). Despite these clear inaccuracies, algorithms like COMPAS are used all the time by local and state governments, including in New York and Chicago, because their judgments are assumed to be more accurate and more consistent than human ones (Chohlas-Wood). These assumptions are not necessarily correct: COMPAS predicts reoffending with about 65% accuracy, barely better than the roughly 63% that untrained people achieve, and ProPublica found that it falsely flags Black defendants as likely reoffenders at nearly twice the rate of white defendants (Yong; Angwin et al.). Still, several local and state courts use these algorithms to inform decisions such as how high to set a bond (Angwin et al.). Artificial intelligence is rarely perfect and can be seeded with bias, especially when it is trained on biased data. That may be tolerable when we are looking up videos on YouTube or scrolling through Instagram, but should an imperfect algorithm be used to make predictions this serious?
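To make that last statistic concrete, here is a minimal Python sketch of the kind of check ProPublica ran. The records are invented for illustration, and the threshold of five as the cutoff for “higher risk” is an assumption that roughly follows ProPublica’s grouping of COMPAS decile scores; the point is only to show what a group-by-group false positive rate looks like.

```python
# Illustrative sketch: invented records, NOT real COMPAS data.
# ProPublica's core fairness check: among people who did NOT reoffend,
# how often was each group still labeled "higher risk"?

records = [
    # (group, risk score 1-10, reoffended within two years?)
    ("Black", 8, False), ("Black", 7, False), ("Black", 9, True),
    ("Black", 3, False), ("white", 3, True), ("white", 2, False),
    ("white", 4, False), ("white", 8, True),
]

HIGHER_RISK = 5  # assumed cutoff: scores of 5+ count as "higher risk"

for group in ("Black", "white"):
    # People in this group who did not go on to reoffend
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    # Of those, the ones the tool still flagged as higher risk (false positives)
    falsely_flagged = [r for r in did_not_reoffend if r[1] >= HIGHER_RISK]
    rate = len(falsely_flagged) / len(did_not_reoffend)
    print(f"{group}: flagged higher risk despite not reoffending: {rate:.0%}")
```

With the real data, this false positive rate came out nearly twice as high for Black defendants as for white defendants (Angwin et al.).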

Police report showing Robert Julian-Borchak Williams as the investigative lead

In January 2020, Robert Julian-Borchak Williams, a Black man, was arrested for a crime he did not commit, even though the surveillance video itself showed a man who was not him. Investigators had run a facial recognition algorithm on the footage, and it concluded that Mr. Williams might be the man in the video. Partly because of that conclusion, the Detroit police arrested him. Facial recognition software, which often relies on AI algorithms, is frequently terrible at recognizing people of color, who tend to be underrepresented in the software’s training datasets (Hill). Even when training datasets are evenly balanced across racial groups, facial recognition remains slightly less accurate for Black faces than for white ones (Wang et al.). The police knew that facial recognition is often wrong when identifying Black people, and the front of the facial recognition file even reminded them in bold, “This document is not a positive identification” (Hill). Yet they still used the match as one of the main reasons to arrest Mr. Williams. This was a case of both a faulty facial recognition algorithm and a bad police investigation. But it makes us wonder: can the police truly be trusted to do their due diligence when shortcuts like AI are available to them?

In 2022, Italy’s privacy regulator fined Clearview AI, a facial recognition company that works with clients in various countries, including China and Ukraine (“INTRODUCING CLEARVIEW AI 2.0”), 20 million euros and ordered it to erase its data on Italian citizens (“Facial Recognition: Italian SA Fines Clearview AI EUR 20 Million”). Clearview scours the internet and trains its software on billions of images of people, which lets it recognize an enormous range of faces; the company is even working on recognizing the same person at different ages (Dave and Dastin). In the war in Ukraine, Clearview has helped uncover Russian assailants, combat misinformation, and identify the dead (Reuters). While this might sound amazingly beneficial, the company has a dark side: it collects images and other personal information, such as location and biometric data, without its subjects’ consent (“Facial Recognition: Italian SA Fines Clearview AI EUR 20 Million”). Regulators in several countries, including Italy, Australia, and Great Britain, have fined the company millions of dollars or ordered it to delete their citizens’ data (Mac). Some of these countries, such as Italy, have also demanded that Clearview stop collecting data on their citizens and destroy the data it has already collected, because the practice violates those citizens’ privacy. This year, Clearview aims to work with the United States government on similar initiatives (Dave and Dastin).

Ayanna Pressley worked with other representatives and senators to propose legislation that would prevent facial recognition software from being used by the government.

In 2022, we are at a pivotal moment in deciding whether AI may be used by police, courts, and other parts of government. I implore you to look deep into your soul and ask yourself: do you want faulty AI and sketchy companies helping to make important government decisions? Yes, technology advances, and we are getting better and better at analyzing people, but do we really want to rely on AI algorithms that are not completely understood and whose pitfalls no one really knows? Senators Edward J. Markey and Jeffrey A. Merkley and Representatives Ayanna Pressley and Pramila Jayapal have proposed legislation that would prevent facial recognition software from being used by the government and would allow individuals to sue states or counties that surveil them with biometric data (“Facial Recognition and Biometric Technology Moratorium Act of 2020”). In denouncing Clearview, they cited evidence that government surveillance makes civilians less likely to speak their minds and that minority communities are disproportionately harmed by government use of facial recognition software. While this will probably slow some government processes, it ensures that the government cannot surveil us through AI and that biased AI systems and sketchy companies will not act on behalf of the US government (Jayapal et al.). If you value your privacy and believe that the innocent should not be unfairly jailed, call your senators and let them know that you will not stand for AI being used by our police, by our judges, or by anyone in the criminal justice system.

Works Cited

“Ending Qualified Immunity Act of 2021.” Office of Representative Ayanna Pressley, https://pressley.house.gov/sites/pressley.house.gov/files/Ending%20Qualified%20Immunity%20Act%20of%202021_0.pdf.

Allyn, Bobby. “‘The Computer Got It Wrong’: How Facial Recognition Led to False Arrest of Black Man.” NPR, NPR, 24 June 2020, https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig.

Castro, Clinton. “What’s Wrong with Machine Bias.” Ergo, an Open Access Journal of Philosophy, Michigan Publishing, University of Michigan Library, 11 July 2019, https://quod.lib.umich.edu/e/ergo/12405314.0006.015/--what-s-wrong-with-machine-bias.

Chohlas-Wood, Alex. “Understanding Risk Assessment Instruments in Criminal Justice.” Brookings, Brookings, 9 Mar. 2022, https://www.brookings.edu/research/understanding-risk-assessment-instruments-in-criminal-justice/.

Jayapal, Pramila, et al. Letter to the Honorable Alejandro N. Mayorkas Regarding Federal Use of Clearview AI. 9 Feb. 2022, Markey.senate.gov, https://www.markey.senate.gov/imo/media/doc/letters_-_federal_gov_use_of_clearview_ai.pdf.

Hill, Kashmir. “Wrongfully Accused by an Algorithm.” The New York Times, The New York Times, 24 June 2020, https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

Rabang, Imelda. “The Bold Impact of Technology and Artificial Intelligence in Law Enforcement.” Bold Business, 20 Mar. 2020, https://www.boldbusiness.com/digital/the-bold-impact-of-artificial-intelligence-in-law-enforcement/.

Angwin, Julia, et al. “Machine Bias.” ProPublica, ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Locke, Kaitlyn. “Rep. Ayanna Pressley Calls for Focus to Remain on Peaceful Activism, Passing Legislation That Condemns Police Brutality.” GBH News, 1 June 2020, https://www.wgbh.org/news/local-news/2020/06/01/rep-ayanna-pressley-calls-for-focus-to-remain-on-peaceful-activism-passing-legislation-that-condemns-police-brutality.

Mac, Ryan. “Clearview AI, a Facial Recognition Company, Is Fined for Breach of Britain’s Privacy Laws.” The New York Times, The New York Times, 29 Nov. 2021, https://www.nytimes.com/2021/11/29/technology/clearview-ai-uk-privacy-fine.html.

“Open States: Discover Politics in Your State.” Open States: Discover Politics in Your State, https://openstates.org/.

Dave, Paresh, and Jeffrey Dastin. “Exclusive: Facial Recognition Company Clearview AI Seeks First Big Deals, Discloses Research Chief.” Reuters, Thomson Reuters, 22 Feb. 2022, https://www.reuters.com/technology/exclusive-facial-recognition-company-clearview-ai-seeks-first-big-deals-2022-02-22/.

“Pressley, Lawmakers Introduce Legislation to Ban Government Use of Facial Recognition, Other Biometric Technology.” Representative Ayanna Pressley, 15 June 2021, https://pressley.house.gov/media/press-releases/pressley-lawmakers-introduce-legislation-ban-government-use-facial-recognition.

Reuters. “Ukraine Has Started Using Clearview AI’s Facial Recognition during War.” CNBC, CNBC, 13 Mar. 2022, https://www.cnbc.com/2022/03/13/ukraine-has-started-using-clearview-ais-facial-recognition-during-war.html.

“Facial Recognition: Italian SA Fines Clearview AI EUR 20 Million. Bans Use of Biometric Data and Monitoring of Italian Data Subjects.” Garante per la Protezione dei Dati Personali, 9 Mar. 2022, https://www.gpdp.it/home/docweb/-/docweb-display/docweb/9751323#english_version.

“Senators Markey & Merkley and Reps. Jayapal & Pressley Urge Federal Agencies to End Use of Clearview AI Facial Recognition Technology.” Office of Senator Edward J. Markey, 9 Feb. 2022, https://www.markey.senate.gov/news/press-releases/senators-markey-and-merkley-and-reps-jayapal_pressley-urge-federal-agencies-to-end-use-of-clearview-ai-facial-recognition-technology.

“The World’s Largest Facial Network.” Clearview AI, https://www.clearview.ai/.

Wang, Mei, et al. “Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network.” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, doi:10.1109/ICCV.2019.00078, https://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_Racial_Faces_in_the_Wild_Reducing_Racial_Bias_by_Information_ICCV_2019_paper.pdf.

Yong, Ed. “A Popular Algorithm Is No Better at Predicting Crimes than Random People.” The Atlantic, Atlantic Media Company, 29 Jan. 2018, https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.
