Will AI Ever Enter the Courtroom?

Tannya Jajal
Published in Mapping Out 2050
8 min read · Aug 6, 2020

In 2017, U.S. state trial courts received an astronomical 83 million cases.

The Chinese civil law system sees over 19 million cases per year, with only 120,000 judges to rule on them.

In the OECD area (which consists of most high-income economies), the average length of civil proceedings is 240 days at first instance; the final disposition of a case often involves a long process of appeals, which in some countries can stretch to seven years.

It’s no secret that judicial processes in many countries are long, tedious, and slow, causing months of misery, pain, and anxiety to individuals, families, corporations, and litigators.

Moreover, when cases do see the light of day in court, the outcome is not always satisfactory, with high-profile cases in particular drawing criticism for being plagued by judges’ biases and personal preferences. Scholarly research suggests that in the United States, judges’ personal backgrounds, professional experiences, life experiences, and partisan ideologies might impact their decision-making.

One thing is clear: judiciary systems across the globe are in desperate need of reform.

AI & automation might just be the solution.

Bias & Fatigue in the Courtroom

Let me start by saying that judges are, after all, human beings. Juries that vote on verdicts, too, consist of human beings. What this means is that judges and juries alike experience the same pitfalls that you or I do: like us, their perceptions, expectations, and biases color the way they see the world.

Even those who consider themselves fairly egalitarian and open-minded are not immune: implicit bias can creep up on the best amongst us. As one paper argues, given that cognitive and social psychologists have demonstrated that human beings often act in ways that are not rational, implicit biases in the courtroom might be even more pervasive than explicit ones.

Being human also means that we’re susceptible to human weaknesses such as fatigue, sleep deprivation, and foggy thinking. A controversial study from 2011 found that when judges decided whether or not a prisoner should be granted parole, the percentage of rulings in favor of the prisoner dropped from roughly 65% to nearly zero within each decision session, depending on how soon after a break the decision was made.

In other words, following long sessions without breaks, a hungry judge may rule unfavorably regardless of the facts of the case.

What the Research Tells Us

Scholars have found that the mere presence of a black judge could change how an appellate panel deliberates, meaning that observing a black judge cast a vote might encourage white colleagues to vote differently.

In estimating the relationship between gender and judging, researchers Gill, Kagan & Marouf found that all-male panels hearing immigration appeals are much harsher with male litigants than with female litigants.

Additionally, when studying implicit biases, researchers found that white judges show strong implicit preferences favoring whites over blacks.

These studies, among many others, indicate that the “lived experience” of the judge may have some impact on the judge’s decision-making. In multiple speeches, U.S. Supreme Court Justice Sonia Sotomayor made a comment about this that garnered a lot of controversy but captures the concept quite well:

“I would hope that a wise Latina woman with the richness of her experiences would, more often than not, reach a better conclusion.”

We can’t deny that all of us are subject to cognitive biases at some point or another. Either way, whether intentional or unintentional, explicit or implicit, subjectivity and bias in the courtroom are difficult things for us to come to terms with.

The fate of thousands of individuals, after all, lies in the hands of people who are as susceptible to skewed perceptions and poor decision-making as you or I might be.

The Case for Artificial Intelligence in the Courtroom

This is exactly where automation and AI come into play.

The applications of AI & automation in the courtroom are two-pronged, intended to address two key issues in judicial systems.

First, when it comes to bias, robo-judges could bypass human shortcomings.

As I wrote in a previous article, advances in deep learning might give rise to what we perceive as “human-level general artificial intelligence”.

Given the multitude of data points that a deep-learning AI is exposed to, it could tap into neural networks that, much like the human brain, can make observations, recognize patterns, and form decisions and judgments.

However, unlike the human brain, deep-learning systems would be able to parse so many data points that they could sharply reduce the probability of bias. A robo-judge would have the ability to sift through years of historical case data, as well as assess all of the facts of a case, and then feed both into its decision trees.

These decision trees (a separate model family that could work alongside its neural networks) would ultimately help the AI achieve the goal it is programmed to achieve: in this case, delivering a ruling, estimating the appropriate length of a sentence, deciding on a pardon or an appeal, and so on.
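To make the mechanics concrete, here is a minimal sketch of the idea: a decision tree fit on past rulings, then queried on a new case. The features, data, and labels are entirely hypothetical, and a real system would need far richer inputs and rigorous validation.

```python
# A minimal sketch: a decision tree trained on hypothetical case outcomes.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per case:
# [prior_convictions, offense_severity (1-10), evidence_strength (0-1)]
X_train = [
    [0, 2, 0.9],
    [3, 8, 0.7],
    [1, 5, 0.3],
    [5, 9, 0.8],
    [0, 3, 0.2],
    [4, 7, 0.9],
]
y_train = ["acquit", "convict", "acquit", "convict", "acquit", "convict"]

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

new_case = [[2, 6, 0.6]]
print(model.predict(new_case))  # a prediction, not a verdict
```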

We’re already using algorithms and technologies like IBM’s Watson to make an evidence-based analysis of risks in all sorts of industries: finance, healthcare, manufacturing.

We can similarly use AI to estimate, for instance, the likelihood that a convicted felon will reoffend, based on historical data that the AI can access. The hope is that, unlike a human judge, a robo-judge would be able to make an objective decision based on all of the data points and facts of a case available to it.
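A hedged sketch of that recidivism estimate might look like the following, where the features and the two-year reoffense labels are all invented for illustration:

```python
# Sketch of recidivism-risk scoring with logistic regression.
# All features and labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age_at_release, prior_offenses, years_served]
X = np.array([
    [22, 4, 1.0],
    [45, 0, 0.5],
    [30, 2, 2.0],
    [19, 6, 0.8],
    [52, 1, 3.0],
    [27, 3, 1.5],
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = reoffended within two years

clf = LogisticRegression().fit(X, y)

# Probability that a hypothetical new release reoffends
risk = clf.predict_proba([[25, 3, 1.2]])[0, 1]
print(f"estimated reoffense probability: {risk:.2f}")
```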

Because an AI is unlikely to have the so-called “lived experience” that a human judge would, the chances of biased decision-making may dramatically decrease.

Moreover, as we’ve seen with the present-day application of automated tools, machines bypass common human weaknesses such as fatigue.

Second, as a precursor to robo-judges, AI & automation tools can be leveraged in the short run to aid human judges in making effective decisions.

The use of automated tools could drastically reduce the time it takes to gather the facts of a case and historical data on similar cases. For a more advanced and nuanced application, AI systems could also help us distinguish lies from truth more effectively. The current tools we use, such as an individual judge’s perception & the polygraph, are too inconclusive and unreliable for use in court, given the many factors that can affect the results.
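As a sketch of the fact-gathering piece, one simple approach (not any court’s actual system) is to rank past cases by text similarity to a new filing. The one-line case summaries below are hypothetical:

```python
# Minimal sketch: TF-IDF text similarity to surface similar past cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "breach of contract over late delivery of goods",
    "landlord-tenant dispute over unpaid rent",
    "negligence claim after a workplace injury",
]
new_case = "supplier failed to deliver goods on the agreed date"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_cases + [new_case])

# Compare the new filing (last row) against every historical case
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {past_cases[idx]}")
```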

Today, AI is already being used to revolutionize mental health treatments by detecting things that human therapists can’t.

Consider Ellie, a virtual therapist launched by the Institute for Creative Technologies. Ellie, who was designed to treat veterans with PTSD, can detect not only verbal cues but also non-verbal cues (facial expressions, gestures, micro-expressions) that may be difficult for a human therapist to catch. Based on these cues, Ellie makes recommendations to her patients. As one can imagine, Ellie has far more subtle data to base her recommendations on than a human therapist might.

Similarly, virtual avatar judges — conceptually designed to conduct face-to-face interactions via video conference tools — may be able to pick up on cues that human judges or litigators would otherwise miss.
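Purely as a conceptual sketch (the cue names, weights, and scores below are invented, not Ellie’s actual model), fusing such cues might reduce to a weighted combination of per-cue scores:

```python
# Conceptual sketch: fuse verbal and non-verbal cue scores into one signal.
# Cue names, weights, and scores are all invented for illustration.
CUE_WEIGHTS = {
    "speech_sentiment": 0.4,    # e.g. from a speech/NLP model
    "facial_expression": 0.3,   # e.g. from a vision model
    "gesture_agitation": 0.2,
    "micro_expressions": 0.1,
}

def fuse_cues(scores: dict[str, float]) -> float:
    """Weighted sum of per-cue scores, each assumed to lie in [0, 1]."""
    return sum(CUE_WEIGHTS[cue] * value for cue, value in scores.items())

observed = {
    "speech_sentiment": 0.2,
    "facial_expression": 0.7,
    "gesture_agitation": 0.9,
    "micro_expressions": 0.6,
}
print(f"composite cue score: {fuse_cues(observed):.2f}")
```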

AI in the Courtroom: Today

Estonia and China are two countries that have already begun to pilot the use of AI in the courtroom.

The Estonian Ministry of Justice is working on a project to build “robo-judges” that can adjudicate small claims disputes. Conceptually, the two parties would upload their documents and data onto the system, and the AI would issue a decision that can, if needed, be appealed to a human judge.
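The control flow might look something like the sketch below; the function names, confidence threshold, and decisions are invented to illustrate the appeal path, not Estonia’s actual system:

```python
# Conceptual sketch of an appealable robo-judge pipeline.
# Names, threshold, and scoring are invented for illustration.

def robo_judge(documents: dict) -> tuple[str, float]:
    """Stand-in for a trained model scoring the uploaded documents."""
    confidence = 0.72  # placeholder score
    decision = "award claimant" if confidence > 0.5 else "dismiss claim"
    return decision, confidence

def adjudicate(documents: dict, appealed: bool = False) -> str:
    decision, confidence = robo_judge(documents)
    if appealed or confidence < 0.8:
        # Contested or low-confidence decisions escalate to a human judge
        return f"escalated to human judge (machine suggestion: {decision})"
    return decision

print(adjudicate({"claimant": "docs", "respondent": "docs"}, appealed=True))
```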

Similarly, China has already introduced over 100 robots to its courts — these robots retrieve past verdicts and sift through large amounts of data.

China has also introduced Xiaofa, a robot that offers legal advice to the public and helps break down complex legal terminology for the layman.

The Challenges & Reasons for Push-back

The question looming over all of this, especially if the goal is to eliminate bias in the courtroom, is: can programs be neutral actors?

One argument against the neutrality of programs is that the questions posed to an AI, and the data fed to it, tend to come from the same demographic: the young, white, male programmers who typically write these algorithms.

But as the technology evolves, we might find ways to make it more robust. Programs may be able to test themselves against discrimination. The key advantage that machines have over humans is the ability to store, compute, and account for hundreds of thousands of data points.

Let’s consider the example of ZestFinance, a lending-technology company that aims to avoid discriminatory lending. ZestFinance was founded on the idea that by looking at tens of thousands of data points, machine-learning programs can expand the number of people deemed creditworthy. A machine-learning model is run through the company’s ZAML Fair tool to ascertain whether there are differences in outcomes across protected classes and, if there are, which variables are causing those differences. To test against discrimination, the lender can then increase or decrease the influence of those variables to lessen bias while preserving accuracy.

ZestFinance takes into account that income is not the only predictor of creditworthiness; it is the combination of income, spending, and the cost of living in a given city.
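A rough sketch of that kind of check (not ZestFinance’s actual ZAML tooling) compares a model’s approval rates across protected groups and flags large gaps; the scores, threshold, and group labels here are hypothetical:

```python
# Sketch of a demographic-parity check across two protected groups.
import numpy as np

scores = np.array([0.81, 0.34, 0.66, 0.72, 0.41, 0.58, 0.90, 0.47])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
approved = scores >= 0.6  # hypothetical approval threshold

rates = {g: approved[group == g].mean() for g in ("A", "B")}
for g, rate in rates.items():
    print(f"group {g}: approval rate {rate:.2f}")

# A large gap suggests some input variables deserve re-weighting
gap = abs(rates["A"] - rates["B"])
print(f"approval-rate gap: {gap:.2f}")
```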

It can be difficult to come to terms with the decisions made by a robo-judge, especially if they are adverse, negatively affect an individual’s life, and are still suspected by the public of containing some bias.

For this reason, in the interim, to ease into the use of AI in the court system, it might make sense to allow decisions made by a robo-judge to be appealed to a human judge, if needed.

But eventually, as is the case with the evolution of any controversial idea, we might realize that the bias and margin of error of a robo-judge are significantly lower than those of a human judge. Just as self-driving cars could prevent accidents and save hundreds of thousands of lives, so too could robo-judges.

In the meantime, what we, as humans, need to be focusing on is the reasonable and responsible use of AI in society.

We need to build a body of law and ethics that can address the many challenges and changes that will come with a future in which we co-exist with machines.

This way, when that future arrives, we’re ready, and when we leverage technology, it’s for the benefit of society at large rather than to its detriment.

Tannya Jajal
Founder of AIDEN, a think tank that solves the $8.8 Trillion employee disengagement problem. www.aiden.global https://technophilosophy.substack.com/