Crime & Punishment: Sentencing in a Brave New World

CrowdJustice · Published May 25, 2017 · 7 min read

Across the globe, advances in big data and artificial intelligence seem to touch every part of modern life, while shifts in domestic policy see numerous countries threatening to enact harsher blanket criminal sentencing (US Attorney General Jeff Sessions' recent moves to revive the war on drugs are a key example). As these two seemingly distinct areas evolve, how will criminal punishment processes and standards change? Two recent developments in the law prompt a closer look at the issues of fair and effective punishment, especially with respect to individualized sentencing and judicial discretion.

Algorithmic sentencing

In March, the Wisconsin Supreme Court held in State v. Loomis that a court's use of an algorithmic risk model in sentencing did not violate due process, even though the methodology used in the risk modeling was disclosed neither to the trial court nor to the defendant. The court justified this lack of disclosure on the ground that the "COMPAS" model employed in the sentencing was proprietary to Northpointe Institute for Public Management and protected as a trade secret. Nor did the Wisconsin Supreme Court find the use of gender in the risk model to be a due process violation. The court did attempt to set some ground rules for the use of algorithmic risk modeling in sentencing decisions, requiring that judges not rely solely on such risk models and that presentencing investigation reports contain five types of warnings when introducing an algorithmic risk assessment, including that the algorithm is proprietary.

Mandatory minimums

Jeff Sessions recently announced that he is ordering all federal prosecutors to seek charges carrying the maximum sentences in drug offenses, reversing an Obama-era policy that reduced the use of mandatory minimums for nonviolent offenses and reserved harsher sentences for violent drug cases and organized crime. In his memo, Sessions reaffirmed the use of mandatory minimums in sentencing guidelines, upsetting Republicans and Democrats alike, including Senator Rand Paul, who wrote in an op-ed for CNN that Sessions' sentencing guidelines would ruin lives.

Combining methodologies and mandates

Although the Loomis case and the Sessions sentencing memo touch on very different aspects of the law, both fundamentally affect the ability of judges to take an individualized and considered approach to sentencing. Sentencing is viewed as one of the most important responsibilities of a judge: to consider the specific circumstances of the crime and of the perpetrator in order to hand down sentences befitting the crime while allowing for rehabilitation.

The risks of risk models

In Loomis, the court did recognize the importance of individualized sentencing, and acknowledged that algorithmic risk assessments predict recidivism risk for groups of people similar to the perpetrator, not for the individual perpetrator himself or herself. Nonetheless, the court reasoned that because a risk assessment would not be the sole basis for a sentencing decision, and because judges retain the discretion and the access to information necessary to counter a risk assessment, sentencing that draws on algorithmic risk assessments would, on the whole, remain sufficiently individualized.

In discussing Loomis, the Harvard Law Review disagreed with the court's decision. It wrote that while the Wisconsin Supreme Court provided a procedural safeguard to warn judges of the risks of relying on algorithmic risk assessments, "this prescription is an ineffective means of altering judges' evaluations of risk assessments. The court's 'advisement' is unlikely to create meaningful judicial skepticism because it is silent on the strength of the criticisms of these assessments, it ignores judges' inability to evaluate risk assessment tools, and it fails to consider the internal and external pressures on judges to use such assessments."

In short, the warnings required by the Wisconsin Supreme Court are fundamentally inadequate, because the algorithms used in these risk assessments cannot be disclosed and subjected to scrutiny and validation. Even if they were disclosed, judges may not have the skills to evaluate them. And given their perceived impartiality and objectivity, judges are likely inclined to rely on these risk assessments anyway.

But the problem lies exactly there: the perception of impartiality, versus what many legal experts and researchers claim is a reality of racial bias that perpetuates the systemic socioeconomic factors leading to criminal behavior. The more that certain communities or demographics are associated with criminal activity, the greater the recidivism risk assigned to an individual belonging to that demographic. As a result, that individual is more likely to receive a heavier sentence, irrespective of his or her individual circumstances, which in turn hurts his or her ability to rehabilitate and reintegrate into society. This further destabilizes those communities, which reconfirms the biases and assumptions built into the algorithms used to sentence future, similarly situated defendants.

Biases in algorithms and methodologies

ProPublica published an article analyzing the results and outcomes of Northpointe's COMPAS risk assessments and discovering what it claims is racial bias in the models. The ProPublica study looked at risk scores for 7,000 people arrested in Broward County, Florida, and benchmarked those scores against new crimes charged over the following two years. ProPublica found that for violent crime, the model was only 20% accurate in predicting who would go on to commit violent crimes, with accuracy rising to 61% when all crimes, including misdemeanors, were considered. Furthermore, ProPublica found significant racial disparities: black defendants were wrongly assessed as future criminals at almost twice the rate of white defendants, while white defendants were mischaracterized as low risk more often than black defendants. Even after controlling for the types of crimes defendants were arrested for, black defendants were still 77 percent more likely to be labeled as higher risk for recidivism.
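The disparities ProPublica reported are, at bottom, group-wise error rates: how often each group is wrongly flagged as high risk (false positives) or wrongly cleared as low risk (false negatives). A minimal sketch of that computation, using made-up data rather than ProPublica's actual dataset, might look like this:

```python
# Illustrative only: the records below are invented, not ProPublica's data.
# Each record is (group, predicted_high_risk, reoffended_within_two_years).

def error_rates(records):
    """Compute false positive and false negative rates per group."""
    stats = {}
    for group, predicted_high, reoffended in records:
        g = stats.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if reoffended:
            g["pos"] += 1          # actually reoffended
            if not predicted_high:
                g["fn"] += 1       # cleared as low risk, but reoffended
        else:
            g["neg"] += 1          # did not reoffend
            if predicted_high:
                g["fp"] += 1       # flagged high risk, but did not reoffend
    return {
        group: {
            "false_positive_rate": g["fp"] / g["neg"] if g["neg"] else 0.0,
            "false_negative_rate": g["fn"] / g["pos"] if g["pos"] else 0.0,
        }
        for group, g in stats.items()
    }

toy = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, False), ("B", True, True), ("B", False, False),
]
rates = error_rates(toy)
print(rates)
```

In this toy data, group A's false positive rate is far higher than group B's even though both contain the same number of actual reoffenders, which is the shape of the asymmetry ProPublica described.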

Although Northpointe's COMPAS model does not ask about race, it asks 137 questions that are either answered by defendants or pulled from criminal records, including whether the defendant's parents were ever sent to prison. Tim Brennan, one of the founders of Northpointe, said that it would be difficult to create a score that excludes items correlated with race, such as poverty, unemployment and social marginalization, because removing them would lower the model's accuracy. Even Brennan, testifying as a witness for a defendant appealing a COMPAS-influenced sentence, said that he had not designed COMPAS to be used in sentencing and punishment but rather to reduce crime, and that he felt uncomfortable with COMPAS being the sole factor in a sentencing decision. On account of Brennan's testimony, the judge subsequently reduced the defendant's sentence by six months, saying that he would have given the defendant that much less time had he not relied on the COMPAS risk assessment.

Displacing power from judges to prosecutors and algorithms

A similar vicious cycle can be seen in mandatory minimums for drug sentencing. While algorithmic risk assessment is a relatively new topic, the debate over the war on drugs and mandatory minimum sentencing has become a well-ingrained part of the cultural mindset over the past decade. Critics of mandatory minimums argue that such straitjacketed approaches to punishment take the discretion for individualized sentencing away from judges while disproportionately punishing blacks and Latinos, swelling prison populations and spending without any impact on public safety. With the rise of opioid overdoses across America, a bipartisan consensus has been forming around the narrative that the drug problem is one of public health, requiring treatment and counseling rather than draconian punishment without regard to the individual or the crime.

A paper from New York University Law School drew similar parallels between mandatory sentencing guidelines and the use of algorithmic risk assessments, arguing that in neither case were bad discretion and human prejudice eliminated. Instead, in the case of mandatory minimums, discretion shifted from judges to prosecutors, who gained the power to determine sentences through the charges they chose to file and by using mandatory minimums as leverage in plea bargaining, a process without judicial oversight. In the case of algorithmic risk assessments, discretion is displaced from the judge to the statistician, or to the even more faceless artificial intelligence behind the algorithms. Judges, at least, are held accountable to the public through published decisions and, for some state judges, through elections.

What’s next

Further study and validation of these algorithmic risk models are needed, particularly to limit bias and to require models to explain their logic and outcomes. In the financial services sector, where the stakes are high but nowhere near as high as one's liberty, models used to determine credit eligibility must pass compliance scrutiny, because banks answer to regulators and to statutory fair-lending obligations. Compliance must approve models before they are deployed, verifying that no implicit biases are built into them and that they can explain how they reach a particular credit decision. At a minimum, similar independent oversight ought to be required of algorithms used in predictive policing and risk assessments: these algorithms must be validated, tested, regularly reviewed and audited, alongside mandatory training for judges and prosecutors.
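One concrete form such a pre-deployment gate could take is a calibration check: verifying that a "high risk" label predicts reoffending at roughly the same rate for every group, within a declared tolerance. The sketch below is entirely hypothetical, not any regulator's actual procedure, and uses invented data.

```python
# Hypothetical audit-gate sketch (not a real compliance procedure).
# Each record is (group, predicted_high_risk, reoffended).

def calibration_by_group(records):
    """Among defendants labeled high risk, the fraction who actually
    reoffended, computed per group."""
    counts = {}
    for group, predicted_high, reoffended in records:
        if predicted_high:
            flagged, hits = counts.get(group, (0, 0))
            counts[group] = (flagged + 1, hits + (1 if reoffended else 0))
    return {g: hits / flagged for g, (flagged, hits) in counts.items()}

def passes_audit(records, tolerance=0.05):
    """Block deployment if the high-risk label means measurably different
    things for different groups (calibration gap above tolerance)."""
    cal = calibration_by_group(records)
    return max(cal.values()) - min(cal.values()) <= tolerance

toy = [
    ("A", True, True), ("A", True, False), ("A", True, True), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, True), ("B", True, True),
]
print(calibration_by_group(toy))
print(passes_audit(toy))
```

A real audit would of course need far more than this single metric (error-rate parity, review of input features, documentation of training data), but even a gate this simple makes the model's group-level behavior a matter of record rather than trade secret.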

In a world of fast technological and political change, new and old ethical questions about the fundamental relationship between state and citizen, and between crime and punishment, become ever more pressing. Politicians, jurists, sociologists, criminologists, technologists, data scientists, philosophers and others need a cross-disciplinary dialogue on the future of the justice system now more than ever.

Have a legal case that could benefit from crowdfunding?

Start a case on CrowdJustice today.


crowdjustice.com is a crowdfunding platform for legal cases — enabling individuals, groups and communities to come together to fund legal action.