The Effects of Technology on the Criminal Justice System

Cindy Kei
Feb 12
[Image credit: Forbes.com]

By now, we have all come to terms with the fact that technology is an inescapable part of almost every aspect of our daily lives. For better or worse, it plays an ever-larger role in the world and in each of our lives. Everywhere we turn, some bigger, better, faster app or product is being developed with the aim of making our lives easier and more efficient.

One of the main reasons I am interested in the field of software engineering is the potential of technology to effect real change and make a difference in so many lives around the world. I love the idea of using technology as a powerful tool to, among other things, tackle issues of social and economic justice, aid in protecting human rights, combat global warming, promote diversity and inclusion, and end the stigma surrounding mental health issues.

One of the more controversial uses of technology is the application of algorithms and facial recognition to the criminal justice system and legal policy. As reported by Cade Metz and Adam Satariano in their New York Times article “An Algorithm That Grants Freedom, or Takes It Away,” algorithms play an increasing role in determining the fates of real people in the criminal justice system.

[Image credit: Laurent Hrybyk]

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals. (Metz and Satariano)

At their core, these predictive algorithms work by gathering information about individuals and comparing that data against statistics of known offenders, such as age, sex, and prior and current convictions.
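
To make that comparison step concrete, here is a minimal sketch in Python of what such a scoring system might look like. Every feature, weight, and statistic below is an invented stand-in; this is a generic illustration of the idea, not the internals of any real risk-assessment tool.

```python
# Hypothetical sketch: score a person by how closely their profile
# tracks aggregate statistics of known offenders. All numbers are made up.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    prior_convictions: int
    current_charges: int

# Invented aggregate statistics for known reoffenders.
REOFFENDER_MEANS = Profile(age=27, prior_convictions=3, current_charges=2)

# Invented weights: how much each feature pulls the score up or down.
WEIGHTS = {"age": -0.02, "prior_convictions": 0.4, "current_charges": 0.3}

def risk_score(person: Profile) -> float:
    """Higher score = profile sits closer to (or beyond) offender averages."""
    return sum(
        weight * (getattr(person, field) - getattr(REOFFENDER_MEANS, field))
        for field, weight in WEIGHTS.items()
    )

# A 35-year-old with one prior conviction and no current charges.
print(round(risk_score(Profile(age=35, prior_convictions=1, current_charges=0)), 2))  # -1.56
```

Even in this toy form, the problem is visible: whoever chooses the features and the weights is quietly encoding judgments into the system, and that is exactly where critics worry bias creeps in.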

There are few, if any, state or federal laws that mandate the disclosure of how these algorithms work and what factors they take into account. In an area already notorious for being flawed and biased, there is understandably much anger and concern surrounding the “growing dependence on automated systems that are taking humans and transparency out of the process” (Metz and Satariano). One of the most pressing concerns among opponents of using predictive algorithms to govern how justice is served is that the creators of these algorithms may hold personal biases along racial, class, and geographical lines that are then “being baked into these systems” (Metz and Satariano).

According to the Electronic Privacy Information Center, “a nonprofit dedicated to digital rights,” almost every state in America now relies on this “new sort of governance algorithm” (Metz and Satariano). In Europe, Algorithm Watch, a “watchdog in Berlin,” has determined that at least 16 countries currently use similar technology (Metz and Satariano).

[Image credit: The Nation]

Technology has also been increasingly used for surveillance. There are computer programs that identify children at risk of abuse, and software that alerts government officials to suspected cases of medical insurance fraud (in the US) or housing benefit fraud (in the UK). As reported in Business Insider, Clearview AI, a tech startup, has created facial recognition software backed by a database of three billion images:

The point of the tool is to match unknown faces with publicly available photos, thus identifying crime suspects. But…Clearview AI, has faced major criticism for the way it obtains images: By taking them without permission from major services like Facebook, Twitter, and YouTube. (Gilbert)
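
To illustrate mechanically what “matching unknown faces with publicly available photos” involves, here is a hypothetical Python sketch of the lookup step: each face is reduced to a numeric embedding, and an unknown face’s embedding is compared against a database of embeddings built from public photos. The encoder, names, and similarity threshold below are all assumptions for illustration, not Clearview AI’s actual pipeline.

```python
# Hypothetical sketch of face matching: nearest-neighbor search over
# face embeddings using cosine similarity. Names and vectors are invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(unknown, database, threshold=0.8):
    """Return (identity, similarity) pairs above the threshold, best match first."""
    hits = [(name, cosine_similarity(unknown, emb)) for name, emb in database.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: h[1], reverse=True)

# Toy usage: random 128-dimensional vectors stand in for a real face encoder.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["person_42"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
print(match_face(probe, database)[:3])  # expect person_42 as the top (and likely only) hit
```

In a real system the embeddings would come from a trained neural network and the database would hold billions of entries, which is exactly where the scale and consent concerns described above come from.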

Clearview AI has received significant pushback from the public and from social media giants, but law enforcement agencies in both the US and Canada have started to embrace the use of facial recognition technology to identify child victims of sexual abuse (Dance and Hill).

Investigators say Clearview’s tools allow them to learn the names or locations of minors in exploitative videos and photos who otherwise might not have been identified. In one case in Indiana, detectives ran images of 21 victims of the same offender through Clearview’s app and received 14 IDs, according to Charles Cohen, a retired chief of the state police. The youngest was 13. (Dance and Hill)

Technologies such as facial recognition software and predictive algorithms may have many uses and benefits, but they do not come without risks and the potential for rights violations. If we want to take advantage of these technologies while simultaneously protecting individual human rights, there must be clear and fair laws to govern their use and ensure their unbiased application.

Works Cited:

Dance, Gabriel J.X., and Kashmir Hill. “Clearview’s Facial Recognition App Is Identifying Child Victims of Abuse.” The New York Times, February 7, 2020.

Gilbert, Ben. “The controversial facial recognition tech from Clearview AI is also being used to identify child victims of sexual abuse.” Business Insider, February 8, 2020.

Metz, Cade, and Adam Satariano. “An Algorithm That Grants Freedom, or Takes It Away.” The New York Times, February 6, 2020.
