Pre-Trial Risk Assessment Tools and Bias: Trends and Resources

Reference Staff
Published in walawlibrary · Oct 23, 2020

**On June 4, 2020, the Washington Supreme Court issued an open letter to the judiciary and legal community recognizing deep-seated and continuing institutional racial injustice and calling for action to address systemic inequities. The Washington State Law Library is dedicated to furthering the Court’s goal by publishing stories that highlight the historical context surrounding systemic racism and efforts to dismantle it.**

The State of Kentucky has long been a leader in the movement to end the cash bail system, a system that on an average day leaves hundreds of thousands of defendants across the United States awaiting trial in jail. While waiting in jail, these defendants risk losing their jobs and custody of their children and, more recently, face an increased risk of contracting COVID-19. In 2013, Kentucky judges began using a pretrial risk assessment tool designed by the Arnold Foundation to evaluate whether defendants should be released on their own recognizance without bail, released only with bail, or held in jail prior to trial. The Marshall Project notes that the Arnold Public Safety Assessment (PSA) is based on an algorithm that evaluates the combined outcomes of over 1.5 million criminal cases to estimate a defendant’s likely dangerousness or flight risk. This analytical tool for judges is credited with decreasing New Jersey’s pretrial jail population by 29.3 percent after the PSA was adopted there. Such risk assessment instruments (RAIs) promise to reduce pretrial detention imposed by a financially discriminatory system and replace it with an economically neutral system based on empirical data rather than judicial insight alone.

As jurisdiction after jurisdiction adopts these artificial intelligence risk assessment tools, critics are challenging them as having a racially discriminatory impact. The Brookings Institution published a series entitled AI & Bias that explores the risk of bias being unintentionally coded into algorithms. A recent article in the series, Understanding Risk Assessment Instruments in Criminal Justice, explains the impact of past discriminatory enforcement on the system:

One of the most concerning possible sources of bias can come from the historical outcomes that an RAI learns to predict. If these outcomes are the product of unfair practices, it is possible that any derivative model will learn to replicate them, rather than predict the true underlying risk for misconduct. For example, though race groups have been estimated to consume marijuana at roughly equal rates, Black Americans have historically been convicted for marijuana possession at higher rates. A model that learns to predict convictions for marijuana possession from these historical records would unfairly rate Black Americans as higher risk, even though true underlying rates of use are the same across race groups.
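
A rough simulation makes the point concrete. The Python sketch below uses entirely made-up numbers (it is illustrative only and is not drawn from any actual RAI or dataset): two groups have the same underlying rate of marijuana use, but one is convicted at a higher rate, so a model that learns risk from conviction records alone scores that group as higher risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: two groups, 0 and 1, assigned at random.
group = rng.integers(0, 2, n)

# Same true rate of marijuana use (15%) in both groups -- a made-up number.
true_use = rng.random(n) < 0.15

# Biased historical enforcement: users in group 1 are convicted at a
# higher rate (30%) than users in group 0 (10%) -- also made-up numbers.
conviction_prob = np.where(group == 1, 0.30, 0.10)
convicted = true_use & (rng.random(n) < conviction_prob)

# A model trained to predict convictions would, at best, recover each
# group's historical conviction rate -- shown here directly.
for g in (0, 1):
    print(
        f"group {g}: true use rate = {true_use[group == g].mean():.3f}, "
        f"learned 'risk' = {convicted[group == g].mean():.3f}"
    )
```

Running the sketch prints matching true use rates for the two groups but a learned “risk” roughly three times higher for the over-policed group; what the model recovers is the enforcement gap, not a difference in behavior.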

Research on the effect of reform in Kentucky by Megan Stevenson of the University of Virginia School of Law and Alex Albright, a Harvard economics Ph.D. candidate, concludes that the benefits of the PSA in Kentucky have been fleeting and that, because of implicit bias, judges continue to take race into account, requiring cash bail of Black defendants with a moderate PSA score more often than of white defendants with the same score. In July 2018, a group of over 100 civil rights organizations, including the NAACP and the ACLU, released a letter opposing pretrial risk assessment tools because they “threaten to further intensify unwarranted discrepancies in the justice system and to provide a misleading and undeserved imprimatur of impartiality for an institution that desperately needs fundamental change.”

Others contend that RAIs, if used carefully, may yet play a part in the effort to end racial disparity in the criminal justice system. The authors of a study of 175,000 cases from the New York-based Center for Court Innovation conclude:

[J]urisdictions must think “beyond the algorithm” — that is, decide what they want to use a risk assessment for and then work to put in place policies and practices in support of those aims. If the goals are reducing incarceration and promoting racial fairness, then a more targeted use of risk assessments could be of particular benefit to the group that currently experiences the worst pretrial outcomes: defendants of color.

More scrutiny of the empirical impact of both bail reform and artificial intelligence RAIs is coming as the nation examines racial injustice within the system and the burdensome economic and human costs of maintaining the world’s highest incarceration rate: over 2.1 million people, or 665 per 100,000, behind bars in 2018.

More on pre-trial risk assessment and other algorithmic tools:

2019 Washington State Supreme Court Symposium Artificial Intelligence: A Critical Review of its Use in Public Decision-Making

MuckRock’s project Algorithmic Control: Automated Decisionmaking in America’s Cities

Prison Policy Initiative’s Pretrial Detention Research Library

ProPublica’s story Machine Bias: Investigating Algorithmic Injustice

Pretrial Justice Center for Courts

Mapping Pretrial Injustice

Pretrial Justice Institute

U.S. Bureau of Justice Assistance Public Safety Risk Assessment Clearinghouse

Wired article Algorithms Were Supposed to Fix the Bail System. They Haven’t

The Safety and Justice Challenge publication The Present And Future Of AI In Pre-Trial Risk Assessment Instruments

Criminal Justice Policy Program report Bail Reform: A Guide for State and Local Policymakers

Yakima County, Washington Pretrial Justice System Improvements: Pre- and Post-Implementation Analysis

Whatcom County Incarceration Prevention and Reduction Task Force (RM)
