In 1970, Lyudmila Terentyevna Aleksandrova lost her right hand. It happened at work, where she was employed by the Russian state. With her hand gone, she fought for a disability allowance that never materialized, her claim batted back and forth between district and regional courts. Eventually, after decades of frustration, she brought the case to the European Court of Human Rights, which ruled in 2007 that there had been a violation of Aleksandrova’s right to a fair trial. Pay the money, it told Russia.

A judgment about Aleksandrova and her missing right hand was also made in 2016 — not by a human, but by an artificial intelligence. It read the legal document, word for word, and thought about which way it should rule. It also read 583 other cases, all from the European Court of Human Rights. It came to the same conclusion as the human judge: that Russia had violated Aleksandrova’s human rights.

A ‘bag of words’

This A.I. exercise was part of an experiment. A team of computer scientists from University College London had developed a system to accurately predict the outcome of real-life human rights cases. They trained a machine-learning algorithm on a set of court decisions around torture, privacy, and degrading treatment, viewing each legal document as a “bag of words” to be analyzed.

“Let’s say you want to do sentiment analysis on a movie review,” Nikolaos Aletras, one of the authors of the study, tells me. “As humans, what we basically do is read it [and make a judgment]. If you wanted to make that more abstract, you could count how many positive and negative words there are in the text and then decide. If I have more positive words than negative words, then this review might be positive.”
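
To make the analogy concrete, here is a minimal sketch of that word-counting idea in Python. The word lists and reviews are invented for illustration; the actual study used statistical models trained on case text rather than hand-picked vocabularies.

```python
# A toy version of the word-counting idea Aletras describes: tally
# "positive" and "negative" words in a review and compare the counts.
# The vocabularies and reviews below are invented for illustration.
import re

POSITIVE = {"great", "moving", "brilliant", "enjoyable"}
NEGATIVE = {"dull", "tedious", "incoherent", "disappointing"}

def crude_sentiment(review: str) -> str:
    words = re.findall(r"[a-z']+", review.lower())
    positives = sum(word in POSITIVE for word in words)
    negatives = sum(word in NEGATIVE for word in words)
    return "positive" if positives > negatives else "negative"

print(crude_sentiment("A brilliant, moving film with a great cast"))  # positive
print(crude_sentiment("Two dull, tedious hours of incoherent plot"))  # negative
```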

“We apply the same techniques on legal texts,” Aletras adds. “By looking into past cases, we learn the importance of particular words in a case.” Once trained, the A.I. was let loose on cases it had not yet seen. It came to the same decision reached by human judges 79 percent of the time. In 2017, a separate analysis examined 199 years’ worth of decisions by the U.S. Supreme Court, with an algorithm learning from 28,009 cases and predicting the outcomes with just over 70 percent accuracy.

Neither the authors of this study nor Aletras and his colleagues claim that their models should replace human judges. But their work suggests that, given enough data, machines can forecast legal decisions.
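
For readers curious what “learning the importance of particular words” looks like in code, here is a simplified bag-of-words classifier sketched with scikit-learn. The handful of one-line “cases” and labels are invented placeholders; the actual studies trained on the full text of real judgments, evaluated on held-out cases, and used considerably more involved features and models.

```python
# A simplified sketch of the bag-of-words approach described above:
# represent each judgment as a vector of word counts, then fit a linear
# classifier that learns which words are associated with a violation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented one-line placeholders standing in for full judgment texts.
case_texts = [
    "applicant alleged degrading treatment during detention",
    "authorities failed for years to enforce a final judgment",
    "complaint concerned prison conditions and ill treatment",
    "domestic courts resolved the property dispute promptly",
    "national proceedings were found fair and timely",
    "the disability allowance was paid once the award became final",
]
outcomes = [1, 1, 1, 0, 0, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(case_texts, outcomes)

# Apply the trained model to a case it has not seen before.
new_case = "state employer never paid the allowance awarded by the court"
print(model.predict([new_case]))
```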

Reasonable doubt

“Over the last two or three years, we have definitely seen a renewed interest in the use of A.I. in the judiciary,” explains Burkhard Schafer, professor of computational legal theory at the University of Edinburgh in Scotland. “There are some good arguments for it. We know our justice system is painfully overburdened, and this can only become worse in many ways.”

In the U.K., sweeping cuts in funding for legal aid, which covers legal costs for people on low incomes, have decimated provisions for society’s most vulnerable. Government funding for local services has been cut by 49 percent since 2010, and the resulting pressures are a grim backdrop to the development of computer systems for predicting child abuse.

In the United States, predictive models have also been used, controversially, as tools for judges. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, developed by a private Michigan-based company called Equivant and widely used to weigh a defendant’s risk of committing another crime, has become the center of an ongoing debate around racial bias.

Indeed, much of the discussion around A.I. and law has been about the scope for these systems to adopt and even magnify prejudices in the data they learn from. Lilian Edwards, a professor at Newcastle Law School in the U.K., tells me there have been efforts to develop “fair” algorithms, which might discount race from the equation, for example, but they still struggle to remove biases. “What you will find is that patterns of discrimination that are actually based on race or poverty might be reconstructed by a bunch of proxies, like postcode or number of children in a certain size of house.”
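
As a loose illustration of the proxy problem Edwards describes, the following sketch uses entirely synthetic data: a model is trained without ever seeing the protected attribute, yet a correlated feature (here, a made-up postcode) reproduces the disparity anyway.

```python
# Toy illustration of proxy bias: the protected attribute is dropped from
# the inputs, but a correlated feature reconstructs it. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. membership of a disadvantaged group).
group = rng.integers(0, 2, n)

# Postcode is not the protected attribute, but it is strongly correlated:
# group members mostly live in postcodes 0-4, others in postcodes 5-9.
postcode = np.where(group == 1, rng.integers(0, 5, n), rng.integers(5, 10, n))

# Historical outcomes are biased against the group (by construction here).
label = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# A "fair" model that never sees the protected attribute, only the postcode.
model = LogisticRegression().fit(postcode.reshape(-1, 1), label)
pred = model.predict(postcode.reshape(-1, 1))

# The disparity survives, because postcode acts as a proxy for group.
print("predicted rate, group members:", pred[group == 1].mean())
print("predicted rate, others:       ", pred[group == 0].mean())
```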

Against the background of the government’s “austerity” budget cuts and an overburdened legal system, the rollout of A.I. in law feels inevitable, despite concerns over its shortcomings. At the tail end of 2018, the European Commission for the Efficiency of Justice published a charter on the use of A.I. in judicial systems, calling for core principles of nondiscrimination, transparency, and respect for fundamental rights — the hope being that we’re at an early enough stage for frameworks like these to be recognized.

And it’s certainly not all foreboding dystopias. There are areas where automation could be of benefit, such as “small and simple cases with repeating factual scenarios,” says Charles Ciumei, QC, a barrister at Essex Court Chambers in the U.K. “Not that they are not important—they could be incredibly important to the people involved, but not of societal importance. It might be where the participants are likely to accept a quicker and cheaper resolution system than going to court.”

Say you lost your right hand in a work accident, much like Aleksandrova. Maybe you would want a way to get your disability allowance without having to appeal to a series of district and regional courts, eventually having to go to the European Court of Human Rights. An A.I. could process your statement, look at the evidence, and pop out an answer.

Versions of this already exist, to some extent. The DoNotPay app, created by Stanford student Joshua Browder, lets its users contest parking fines, fight credit card fees, and sue in small claims court for up to $25,000, all within the app and all without having to pay for a lawyer. The app’s pitch: “Fight corporations, beat bureaucracy, and sue anyone at the press of a button.”

What is a judge?

Quick and cheap access to justice is hard to argue against, but there are still questions about where this path leads. If it becomes so easy to seek legal action, could it foster a culture of perpetual micro lawsuits? More insidiously, could it change the character of justice? Look at the ways instantaneous communication and algorithmic predictions have affected public and political discourse. Even if an A.I. can make the “right” verdict on a case, should it?

What do you think of when you think of a judge? A furrowed brow. The final word. But perhaps just as important is the idea of a judge as a pair of ears.

“We give a certain symbolic significance to judges,” Schafer says. “They are more than technicians of the law. It is a place where the state says it is listening to you as a citizen. Is it the right way to treat vulnerable people in our society, to not allow them a sympathetic ear, even if the decision [they get from an automated verdict] is objectively right?”

It’s a sentiment echoed by Ciumei, who says the process hinges on citizens feeling like they’ve had a fair hearing. “It’s a bit like banking — it’s a question of trust. We all believe in the system to operate fairly, and that maintains public order. One of the features of that, from my own experience, is that people need to be listened to. The process itself is an important part of its function.”

For all the fear of A.I. doling out the wrong sentences, perhaps the real concern lies in what happens if it delivers the right ones. What if an ever-greater degree of automation, pushed by cuts to human services, erodes trust in the judiciary, so that we stop seeing the law as a verdict handed down by society and start seeing it as the result of an incomprehensible equation?