AI and A2J webinar series: Hannes Westermann — The Hammer and the Carpenter

Nóra Al Haider
Legal Design and Innovation
6 min read · Jun 17, 2024

When you explore the online legal community space, you’ll notice that the topic of AI and Access to Justice (A2J) comes up quite frequently. Most agree that AI & A2J is a consequential topic, yet the phrase itself remains nebulous. What do we mean by AI & A2J? What kind of experiments are researchers and practitioners conducting? How are new tools and projects evaluated?

To delve into these questions, the Stanford Legal Design Lab recently initiated a webinar series. Each month, a presenter is invited to discuss a study or project in the AI & A2J space. Attendees can learn about new AI & A2J projects, ask questions, make connections, and find new ideas or protocols for their own work. The ultimate goal is to build more collaborations between researchers, service providers, technologists and policymakers.

The inaugural presenter for the webinar was Hannes Westermann. Hannes is an Assistant Professor in Law & Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab. His current research focuses on using generative AI to enhance access to justice in a safe and practical manner.

Generative AI for Access to Justice

Generative AI can be a valuable addition to the access to justice space. Laypeople often have difficulty resolving their everyday legal issues, from debt to employment problems. This struggle is partly due to the complexity of the law. It is challenging for people to understand how laws apply to them and what actions they should take regarding their legal issues, assuming they can identify that they have a legal issue in the first place.

Moreover, the cost of going to court can be high, not just financially but also in terms of time and emotional stress. This is particularly true if individuals are unaware of how the process works and how they should interact with the court system.

Generative AI could help alleviate some of these issues and create more opportunities for users to interact with the legal system in a user-centered way.

Hannes spotlighted three projects during his presentation that address these issues.

JusticeBot: Assisting with Landlord-Tenant Disputes

The JusticeBot project, developed at the Cyberjustice Laboratory in Montreal, provides legal information to laypeople. The first version, available at https://justicebot.ca (in French), addresses landlord-tenant disputes. The bot asks users questions about their landlord-tenant issues and provides legal information based on their responses. Users start by indicating whether they are a landlord or a tenant. Based on their choice, the system presents a series of questions tailored to common issues in that category, such as unpaid rent or early lease termination. The system guides users through these questions, providing explanations and references to relevant legal texts.

For instance, if a landlord indicates that their tenant is frequently late with rent payments, the JusticeBot asks follow-up questions to determine the frequency and impact of these late payments. It then provides information on the landlord’s legal rights and possible next steps, such as terminating the lease or seeking compensation.
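
This guided flow can be pictured as a question-and-answer tree. Below is a minimal sketch of the pattern in Python; it is an illustration, not the JusticeBot’s actual code, and the questions and legal texts are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str | None = None            # None for leaf nodes
    answers: dict[str, "Node"] = field(default_factory=dict)
    information: str | None = None         # legal information shown at a leaf

# Hypothetical fragment of a landlord-tenant tree; the wording is placeholder
# text, not actual legal information.
late_rent = Node(
    question="How often was the rent paid late over the past year?",
    answers={
        "once or twice": Node(information="Placeholder: occasional lateness "
                              "alone may only warrant a formal notice."),
        "frequently": Node(information="Placeholder: frequent late payment "
                           "may support a termination or compensation claim."),
    },
)

def run(node: Node) -> None:
    """Walk the tree until a leaf with legal information is reached."""
    while node.information is None:
        print(node.question)
        choice = input(f"Choose one of {list(node.answers)}: ").strip().lower()
        node = node.answers.get(choice, node)   # re-ask on unrecognized input
    print(node.information)
```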

The team at the Cyberjustice Laboratory collaborated with the Tribunal administratif du logement (TAL) in developing and marketing the JusticeBot. The TAL receives over 70,000 claims and over a million calls annually. By automating the initial information-gathering process, the JusticeBot could alleviate some of this demand, allowing users to resolve issues without immediate legal intervention. So far, the JusticeBot has been used over 35,000 times.

The first iteration of the bot was built as a logic tree, with explicit logical connections between questions and answers, making it possible to verify the accuracy of the legal information. In recent years, Westermann and his team have experimented with integrating language models such as GPT-4 into the JusticeBot. This hybrid approach could ensure the accuracy of the information while enhancing the human-centered interface of the bot and increasing the efficiency of creating new bots.
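
As a hedged sketch of what such a hybrid could look like, the snippet below keeps the logic tree as the authoritative source of legal information and uses a language model only to map a user’s free-text reply onto one of the tree’s predefined branches. It uses the OpenAI Python SDK; the model name, prompt, and fallback behavior are illustrative assumptions, not details of the actual JusticeBot integration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_reply(question: str, options: list[str], reply: str) -> str | None:
    """Map a free-text reply onto one of the tree's predefined branches."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                f"Question asked: {question}\n"
                f"User's reply: {reply}\n"
                f"Answer with exactly one option from this list: {options}"
            ),
        }],
    )
    choice = response.choices[0].message.content.strip().lower()
    # Only trust the model if it picked a real branch; otherwise re-ask.
    return choice if choice in options else None
```

Because the model can only select among branches the tree already defines, a misclassification at worst leads to a re-asked question rather than fabricated legal information.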

DALLMA: Document Assistance

The next project Hannes discussed is DALLMA, which stands for Document Automation, Large Language Model Assisted. This early-stage project aims to automate the drafting of legal documents using large language models. The current version focuses on forms, which people often find complicated to fill out. Users provide basic information and context, and the AI assists in structuring the document and populating it with relevant legal content. In the future, this could increase the efficiency of drafting forms and other legal documents.
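
The general pattern might be sketched as follows: a language model extracts structured fields from the user’s plain-language account, and a fixed template controls the final wording of the document. This is a minimal illustration of the idea rather than DALLMA’s actual implementation; the template, field names, and API usage are assumptions.

```python
import json
from string import Template
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical form template: the final wording stays under the system's control.
FORM = Template(
    "NOTICE OF CLAIM\n"
    "Claimant:   $claimant\n"
    "Respondent: $respondent\n"
    "Amount:     $amount\n"
    "Summary:    $summary\n"
)

def draft_form(user_story: str) -> str:
    """Extract structured fields from a plain-language story and fill the form."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        response_format={"type": "json_object"},  # request machine-readable output
        messages=[{
            "role": "user",
            "content": "From the description below, extract JSON with the keys "
                       "claimant, respondent, amount, and summary.\n\n" + user_story,
        }],
    )
    fields = json.loads(response.choices[0].message.content)
    return FORM.safe_substitute(fields)  # deterministic rendering of the document
```

Keeping the rendering step deterministic means the model can only supply field values; it never rewrites the legal language of the form itself.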

LLMediator: Enhancing Online Dispute Resolution

The LLMediator project explores the use of large language models in online dispute resolution (ODR). The LLMediator suggests how to phrase communications more amicably during disputes, analyzing the content and sentiment of each message to prevent escalation and promote resolution. For example, if the LLMediator detects aggressive language that could escalate the conflict, it might suggest more constructive phrasing. It can also propose a potential intervention message to a human mediator, supporting their work of encouraging a friendly resolution to the dispute. In short, it acts as a virtual assistant that supports mediators and parties by providing suggestions while leaving the final decision with the user.
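
As a rough illustration of this pattern (not the LLMediator’s actual code), one could prompt a model to screen a draft message and propose a friendlier rewrite, while the human always decides what is sent. The model choice and prompt below are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_rephrasing(draft: str) -> str | None:
    """Return a de-escalated rewrite of `draft`, or None if none seems needed."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,
        messages=[{
            "role": "user",
            "content": "You assist an online dispute resolution platform. "
                       "If the following message is aggressive or likely to "
                       "escalate the conflict, rewrite it more constructively. "
                       "Otherwise reply with exactly OK.\n\n" + draft,
        }],
    )
    suggestion = response.choices[0].message.content.strip()
    return None if suggestion == "OK" else suggestion

# The human stays in the loop: the suggestion is advisory only.
draft = "This is ridiculous, you never fix anything and I'm done waiting!"
if (better := suggest_rephrasing(draft)) is not None:
    print("Suggested rewording:", better)
```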

The Challenges of AI

The projects presented by Hannes show the promise of integrating LLMs into the A2J space. However, it is important to be aware of the challenges involved in bringing such tools into the legal system. One issue is hallucination: the AI generates plausible-sounding but incorrect information, which is especially problematic in the legal domain. Hannes explains that this happens because the AI predicts the most probable continuation of a phrase based on its training data, which does not guarantee accuracy. More research is needed to find ways to mitigate these issues. One potential solution is to conceptualize systems as “augmented intelligence”, as demonstrated, for example, in the LLMediator project. In this approach, the AI system does not hand down predictions or decisions to the user. Rather, it provides information or suggestions that help users make better decisions or accomplish tasks more efficiently.

Another potential solution is to combine AI systems with transparent, logical reasoning systems, as shown, for example, in DALLMA. This approach could combine the power of large language models with legal expert knowledge to ensure that users receive accurate legal information. It could also help counter biases that may be present in the training data of AI models.

Privacy is another concern, especially in the legal field, which deals with large amounts of sensitive and confidential information. When using hosted large language models, this data may be sent to external servers. However, Hannes notes that recent developments in AI technology have produced powerful local AI models that offer stronger privacy protections. AI providers could also offer contractual guarantees of data protection.
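
As a small illustration of the local-model option, the sketch below runs inference through Ollama on the user’s own machine, so the text never leaves the device. The ollama package, model name, and prompt are illustrative assumptions.

```python
import ollama  # assumes the ollama package and a locally pulled model

def summarize_locally(case_notes: str) -> str:
    """Summarize sensitive notes without sending them to an external server."""
    response = ollama.chat(
        model="llama3",  # illustrative local model
        messages=[{
            "role": "user",
            "content": "Summarize these confidential case notes in three "
                       "bullet points:\n" + case_notes,
        }],
    )
    return response["message"]["content"]
```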

To make sure that AI is implemented in a safe and practical manner in the legal system, it is important to keep these and other challenges in mind. Potential mitigations include technical innovations and evaluations, regulatory and ethical safeguards, guidelines for the use of AI in legal contexts, and education for users about the limitations of AI and the importance of verifying information received from AI models.

Future Directions

Hannes concluded his presentation by stating that generative AI should be viewed as a powerful tool that augments human intelligence. The analogy he used is that of the hammer and the carpenter.

“‘Will law be replaced by AI?’ is a bit like asking: ‘Will the carpenter be replaced by the hammer?’ It just kind of doesn’t make sense as a question, as the hammer is a tool used by the carpenter and not a replacement for them.”

AI is a powerful tool that can usefully serve many use cases in the access to justice space. More research is needed to better understand these use cases and to evaluate the tools. Hannes hopes that the community will engage with these systems and understand what they have to offer, so that we can leverage AI to increase access to justice in a safe way.
