Key learnings of ICAIL2019

Adèle Gillier
Oct 8, 2019

by Pauline Chavallard and Adèle Gillier

We were lucky to participate in the 17th International Conference on Artificial Intelligence and Law in Montreal last June. This conference has been organised by the International Association for Artificial Intelligence and Law (IAAIL) for the past 30 years. IAAIL is a nonprofit association devoted to promoting research and development in the field of AI as applied to law. It provides a forum for the presentation and discussion of the latest research results and practical applications, fostering interdisciplinary and international collaboration. This conference is focused on AI as applied to law, which is precisely Doctrine’s core business.

Doctrine is a legal platform that provides legal information and insights, in particular on court decisions. We use many different NLP techniques to understand legal documents; see for example: https://blog.doctrine.fr/structuring-legal-documents-with-deep-learning/. As such, we try to stay up to date with the state of the art in the field through training, conferences and meetups (we organise the Paris NLP meetup, the biggest Natural Language Processing meetup in France).

AI as applied to law is a very broad subject at the crossroads of many different fields. At ICAIL, we had the chance to hear not only about Natural Language Processing techniques, but also about legal reasoning and ontologies. In this blogpost, we will focus on our key learnings.

Best overall paper

Why Machine Learning Leads to Unfairness in Juvenile Justice: Evidence from Catalonia

Marius Miron, Songül Tolan, Emilia Gomez & Carlos Castillo, European Commission

One striking thing about this conference was the number of talks about fairness and bias. In the legal domain especially, this is a huge challenge to tackle.

This paper studies the fairness of machine learning algorithms using LIME (Local Interpretable Model-Agnostic Explanations) and highlights the features for which unfairness is most present. It then compares the misclassification rates of male and female defendants to measure “fairness” in recidivism prediction, and shows that using features like age, sex and country to predict criminal recidivism leads to unfairness.
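To make the fairness measure above concrete, comparing per-group misclassification rates takes only a few lines of numpy. This is our own minimal illustration, not the authors' code, and the group labels and toy data are made up:

```python
import numpy as np

def group_misclassification_rates(y_true, y_pred, groups):
    """Misclassification rate per demographic group.

    A large gap between groups (e.g. male vs. female) is one
    simple signal of unfairness in a recidivism classifier.
    """
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy example: the classifier errs far more often on group "F".
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0])
groups = np.array(["M", "M", "M", "M", "F", "F", "F", "F"])
rates = group_misclassification_rates(y_true, y_pred, groups)
```

The paper goes further (per-feature analysis with LIME), but a gap like the one in this toy example is already a red flag.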

https://www.slideshare.net/MariusMiron2/why-machine-learning-may-lead-to-unfairness

Best application paper

Semi-Supervised Methods for Explainable Legal Prediction

Luther Karl Branting, Craig Pfeifer, Lisa Ferro, Alex Yeh, Bradford Brown, Brandy Weiss, Mark Pfaff & Amarty Chakraborty, MITRE Corporation

An interesting point of this paper was to show that attention is not explanation.

The goal is to provide explainability for the outcome of a court decision by highlighting relevant sentences. They showed that sentences highlighted by attention mechanisms did not provide good explanations. They then followed a semi-supervised approach to build a dataset: after a few manual annotations, they labeled every sentence whose embedding was close to an already labeled one (the SCALE project: semi-supervised case annotation for legal explanation). This showed very promising results.
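The embedding-based labeling step can be sketched as follows. This is our own minimal illustration of the idea, not the SCALE implementation; the cosine-similarity `threshold` is an arbitrary assumption:

```python
import numpy as np

def propagate_labels(emb, labels, threshold=0.8):
    """Copy a seed label to every unlabeled sentence whose embedding
    is close (cosine similarity >= threshold) to a labeled one.

    emb:    (n, d) array of sentence embeddings.
    labels: list of length n; a string for seed sentences, None otherwise.
    """
    # Normalize rows so a dot product is a cosine similarity.
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    seeds = [i for i, l in enumerate(labels) if l is not None]
    out = list(labels)
    for i, l in enumerate(labels):
        if l is not None:
            continue
        sims = emb[seeds] @ emb[i]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            out[i] = labels[seeds[j]]
    return out
```

A few manual seed annotations can thus label a much larger corpus, at the cost of some label noise controlled by the threshold.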

Invited Speakers

Several speakers spoke about ethics in AI, especially the three invited speakers: Yoshua Bengio, Pim Haselager, and Bart Verheij.

Yoshua Bengio (Mila), co-recipient of the 2018 ACM A.M. Turing Award, gave an inspiring talk about AI: the latest impressive breakthroughs, but also the dangers we may face, such as manipulation through advertising or the reinforcement of social biases.

The main challenge is what he calls the “wisdom race”: collective and individual wisdom has increased, but not fast enough to keep up with the rising power of the tools we are building, which enable power concentration and can be dangerous in the long run.

Regarding law, he thought that automatic recommendation systems can help judges but will never replace them, because judges have more information and understand the whole picture.

Pim Haselager, of Radboud University Nijmegen, talked about aspects of human-AI interaction.

This talk discussed how we can ensure that the human aspect remains important in automated processes. Technology sometimes puts humans, for regulatory purposes, in a position where it is difficult to achieve sufficient control. This can create a form of entrapment: humans who cannot deploy the necessary attention are still legally responsible, triggering accidents by design.

In such a context, humans will tend to follow the recommendations of an AI system, because if they don't they will have to justify their decisions, creating what the speaker calls a “burden of proof”. There is therefore a need for criteria to evaluate human intervention, to avoid such situations. Humans need to be educated about the shortcomings, issues and consequences of AI systems to make their decisions.

For Bart Verheij, President of IAAIL, AI should be socially aware, responsible and explainable. The topics to address to do this are: reasoning, knowledge, learning & language. AI must address all these challenges, with common sense, interpretation, understanding, responsibility and explainability.

COLIEE competition

The Competition on Legal Information Extraction/Entailment (COLIEE) is an open competition that takes place every year. The 2019 tasks covered information retrieval, similarity between legal cases or paragraphs, and question answering. The participants explained their methods during a workshop at ICAIL.

The best results came from complex NLP models (many of them using BERT), but the dataset was not large enough for deep learning methods to reach their full potential. Most of the participants wished they had more legal text to compute embeddings specific to the legal domain. Moreover, legal documents are very long, which is a drawback for classic NLP methods.
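A common workaround for the length problem with fixed-length models such as BERT is to split a document into overlapping windows and pool the per-window predictions afterwards. This is a generic sketch of that idea, not a method from the competition; the window and stride sizes are arbitrary:

```python
def sliding_windows(tokens, window=512, stride=256):
    """Split a long token sequence into overlapping chunks so each
    chunk fits a fixed-length model; per-chunk predictions are then
    typically pooled (e.g. max or mean) into a document-level result.
    """
    chunks = []
    start = 0
    while True:
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += stride
    return chunks
```

The overlap (`window - stride` tokens) limits the risk of cutting a relevant passage in half at a chunk boundary.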

We face these issues at Doctrine as we work on legal documents, and this is what makes these problems really interesting to solve.

Some interesting business applications in law

  • Using unsupervised clustering techniques to identify arguments in legal documents, Prakash Poudyal, Teresa Goncalves and Paulo Quaresma

This paper provides an unsupervised framework to cluster sentences related to the same legal argument, using a fuzzy clustering algorithm. It also proposes a method to detect the number of arguments (i.e. the number of clusters) in a decision. An interesting result to share is that Word2Vec features performed best for this clustering.
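For readers unfamiliar with fuzzy clustering: unlike k-means, each point gets a degree of membership in every cluster. Here is a minimal numpy sketch of the classic fuzzy c-means update (our own illustration, not the authors' code; the fuzzifier `m` and iteration count are arbitrary defaults):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means.

    Returns (centers, U) where U has shape (n, c) and U[i, k] is the
    membership degree of point i in cluster k (rows sum to 1).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        # Centers are membership-weighted means of the points.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every point to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        # Standard membership update for fuzzifier m.
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

In the paper's setting, `X` would be sentence vectors (e.g. Word2Vec averages) and `c` the estimated number of arguments in the decision.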

  • Query reformulation to maximise the relevancy of the answer, Arunprasath Shankar and Venkata Nagaraju Buddarapu, Lexis Nexis

The goal of this poster was to present a framework for query reformulation: how to modify a query to improve the quality of search engine results. They used a neural machine translation approach, and augmented their dataset with an algorithm that introduces misspellings (based on the Needleman-Wunsch algorithm).
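Needleman-Wunsch is a classic dynamic-programming algorithm for globally aligning two sequences; here is a minimal sketch of the score computation (our own illustration with arbitrary scoring parameters, not the poster's pipeline), which could for instance measure how close a generated misspelling stays to the original query term:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two strings via dynamic programming."""
    n, m = len(a), len(b)
    # score[i][j]: best alignment score of a[:i] against b[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]
```

A full implementation would also trace back through the matrix to recover the alignment itself, not just its score.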

The goal of this paper was to detect shifts in view between arguments within the same case law. They used very different techniques, such as sentiment analysis, verb relationships and inconsistency between triples.

Finally, we would like to thank the organisers of ICAIL, the Cyberjustice Laboratory and IAAIL, and especially our company, Doctrine, for giving us this opportunity to learn a lot and meet interesting people in our field, which allows us to stay informed on the latest technical and scientific developments.
