AI & The Stanford Legal Design Lab

Nóra Al Haider
Legal Design and Innovation
10 min read · Jan 25, 2024

You don’t need to be a fortune teller to predict that in 2024 LLMs will play an important role in the legal field. If 2023 opened up new possibilities in how we can interact with online legal information, then 2024 will be a year where we will be able to assess and evaluate the value, user-friendliness, and efficiency of LLM models when applied to access to justice.

This is one of our key focus points at the Stanford Legal Design Lab. At the start of the year, I sat down with Margaret Hagan, the executive director of the Stanford Legal Design Lab, to chat about the lab’s upcoming projects and vision regarding AI & Access to Justice.

Nóra Al Haider (NAH): Do you remember the first time you read about ChatGPT and used it? Were there any thoughts going through your mind as you played around with this new tool?

Margaret Hagan (MH): Yeah, I think the first inkling I had of it was probably something you had shared. I think a tweet of someone saying: “Google Search better watch out, because this is amazing. Here are some screenshots of what I was able to do with ChatGPT.”

The Lab’s initial ChatGPT findings were documented in this piece for our Medium publication.

So at first I was like, is this just another hype? Is this just something that people in Silicon Valley get excited about and then later on we’ll all be disappointed in its practical applications?

But I think the shocking thing for me was how something that sounded as awkward and technical as “ChatGPT” actually took off, became publicly known, and turned into this phenomenon. It still might be a fad, just like many other fads, but I think now we can tell that there is something there, and that it’s not going to stay an insider development. So I was kind of shocked by that month between when I first saw the tweet and the actual uptake: holiday conversations last year at this time, where family members were talking about it. That was really surprising to me.

NAH: Was there also some concern when you saw the broader use of this technology, or did you see the opportunities it could provide, in particular to communities that we center in our research, like self-represented litigants?

MH: You know, at the lab a few years ago we had started several projects exploring AI in the courts. We interviewed court leaders and talked to tech companies about using AI to speed up legal research and to better sort and triage people’s cases. I think when I saw the capabilities of this new generation of generative AI, I thought: ‘Oh, my gosh, all these futuristic things are coming much sooner than I thought!’

All those conversations we had 5 years ago, where I thought, ‘Oh, that is too futuristic,’ suddenly felt within reach. Back then it was beyond the technology’s capabilities, and also beyond the interest and concern level of many of the stakeholders who run justice institutions.

In that sense there was excitement that we are in a new era, and that there’s a chance to have a lot of important strategic conversations with leaders that were maybe not possible 4 or 5 years ago, when the technology was not there or there was not that public awareness of AI’s potential.

NAH: Just to fast forward to the present: what kind of developments and projects are you personally interested in when it comes to AI and Access to Justice?

MH: At such a wonderful university, with so many interdisciplinary possibilities and so much expertise, our Lab is really well positioned to be an R&D lab for many use cases.

All of these things that we’ve worked on over the past 10 years, it feels like there’s this potential to use AI for them now. Think about helping a person understand their legal issue, feel better engaged, capable, and ready to take action, and then able to execute complicated legal tasks, like filing paperwork, writing letters, negotiating, or understanding a contract. There is also an opportunity to amplify the behind-the-scenes work of legal aid, pro bono, and court staff.

We will try to run many R&D cycles for these various specific use cases. And I think our goal is really to foster a lot of high-quality empirical work that’s not too quick, you know. The focus is not necessarily to launch new products, but to take an empirical, research-based approach that’s still very practical. Really helping our partners understand and put to the test what AI can do, fine-tune models for their tasks, and help them establish quality standards. I think we can be a really great sandbox R&D lab to help explore and put AI to the test for specific tasks.

NAH: That is so exciting and I guess this ties into my next question. Many readers of this publication are familiar with legal design and using a human-centered approach in their legal work. What role do you think legal design should play when it comes to AI and Access to Justice?

MH: I think it gives us a methodology to work with in this kind of cooperative, co-design way, so that as we develop these ambitious and possibly impactful new systems, we do it in a multi-stakeholder way in order to build technology that is safe and responsible. Hopefully the legal design methodology can be put in place to mitigate all those unintended consequences that people worry about and to guide the new R&D work that’s going to be happening.

NAH: I know you have been talking to a lot of different stakeholders. What are some of the things that you have been hearing from them? Do they share your concerns and cautious excitement?

MH: I think there’s definitely a mixture of optimism and concern. There is worry about risks that we know about and risks that we don’t even know about. I think that’s probably a healthy approach. Obviously, we don’t want to be rushing forward, spending money or launching new tools without a really healthy, cautious approach. At the same time, many people who I have talked to, especially at that strategic level, in charge of statewide access to justice initiatives, have a real interest in mapping out where the lower-risk, higher-impact use cases are and then starting to test models to see if they could be launched as products or other user-facing services.

So I think there is that excitement, but there is not necessarily a wish list of exactly what they want AI to do. We are still at that very early stage. We know it can do something, but now it’s time to start getting specific about the particular tasks. And also to think about questions, such as: who would be using it? What are the quality standards to know if it’s safe before we feel okay launching it?

NAH: There does seem to be tons of excitement across the different organizational levels. I guess my concern is, and we always discuss this at the lab, how can we make sure that we actually create systemic impact? That these projects are not just ad-hoc, one-off projects living at a particular court or mayor’s office. How can we make sure that all these new initiatives are connected in this AI and Access to Justice ecosystem?

MH: That’s such an important question. I think at this beginning stage of AI and access to justice, the more that we can establish partnerships between the different stakeholders, the better.

We need to establish strong partnerships between those who own the data needed to train models, usually courts, legal aid organizations, and other government agencies, and, on the other hand, the university groups, private vendors, non-profits, and technology groups that need that data to design and develop models.

The more that these partnerships can be established, the more R&D experiments will take place. I’m not saying that the data has to be shared with everyone, but we do need partnerships, and we need to make sure that all these different groups are talking to each other.

As these kinds of R&D experiments happen, and as protocols and best practices about models are shared, we can create more understanding about AI, especially among the justice institutions that may feel very cautious about sharing their data or engaging in AI work.

We could create a blueprint for developing rules of the road and standards on the best ways to share data while staying safety- and privacy-focused. This way we could also link individual projects with other similar ones, because I think we really want to avoid a world with lots of different models all tackling the same problem. Duplicative efforts just lead to more of the patchwork nature of the justice system. Ideally, this can be an opportunity to start building more interoperability between the technology systems in the different states. But we really need to be careful with how we work now to ensure that happens.

NAH: Would this require a network similar to the Eviction Prevention Learning Lab that we worked on for over 3 years at the Lab? (Note for the readers: the Eviction Prevention Learning Lab was a nationwide network for cities to exchange knowledge and best practices regarding eviction prevention. This project was a collaboration between the Legal Design Lab and the National League of Cities. Read more: https://www.nlc.org/initiative/eviction-prevention-learning-lab/)

MH: Exactly. And we can think much more internationally. Obviously, AI projects have to be sensitive to local laws, local rules, local procedures, and other jurisdiction-specific concerns. But I think that many of the quality standards, the benchmark definitions for how to run a really good AI and Access to Justice project, and the protocols for when new products are ready to be launched to the public, all of those things can be cross-jurisdictional. They can be international, even if there are local flavors to the projects and data. Hopefully, universities can play a strong role there. But there is also an important role here for bar foundations, access to justice commissions, and other statewide entities that can get local buy-in and have local awareness of what’s happening nationally or internationally.

NAH: Let’s talk a bit about the Lab and our upcoming projects. Could you tell the readers of this article a bit more about specific AI and Access to Justice projects that we are working on at the Lab?

MH: We’re focused on 3 work streams, especially over the next 5 years, and we’ve started work on some of those in our classes and our project and research work.

The first stream is really about assessment: researching and auditing. In this track we are interested in examining the quality of the most well-known AI tools, such as ChatGPT, Google Bard and Microsoft Bing, for common civil justice legal help queries. We want to establish a regular audit that we can run yearly, if not more often, to see how those models are performing in terms of legal quality. At the same time, we also want to talk to members of the public about what they want and how they use these tools, and to really define user needs in terms of their excitement or apprehension around AI.

The second work stream is running a series of R&D cycles around different legal tasks, like writing a demand letter to a landlord. For this work stream we are interested in questions such as: can we get AI to write a demand letter to a landlord? What kind of quality does it produce? Can we fine-tune a model to improve that quality? And can we measure the AI’s performance against a human’s performance? I think we’ll be running more R&D cycles for different kinds of legal tasks, whether it’s around dispute resolution, contract explanation, or sorting and triaging different cases. There are so many different specific tasks that we can explore with various stakeholders.

Stream 3 is going to be all about network coordination and convening. So bringing together people to make sure there is a strong national, if not also international, group where protocols, findings, and models are being shared and common agendas and benchmarks are being set. I think we already have a great network in this space from our past 10 years of work, and I think there’s a lot of interest. So we’re really excited to bring people together and be a central place for that kind of coordination.

NAH: So then my last question, which might be a difficult one because of all these rapid developments: in 5 years time, what would you like to have accomplished in this field?

MH: I think again, I’m trying to ride the fence between too much optimism and too much cynicism. But I would love to see, in 5 years’ time, the major legal help websites in every state and the case management systems in the civil courts all have a handful of AI-powered interventions. Whether that’s a chatbot on the website that can help you find the right form and walk you through filling it out, or, on the case management side, an assistant that helps clerks review incoming cases and sort them onto the right track.

Hopefully, in 5 years, we’ll have defined a first generation of low-risk, high-impact AI interventions that help guide people and empower them to do complicated legal tasks. Once that first generation is defined, hopefully we’ll be on our way to tackling the more ambitious stuff.
