“Law is a public good. It should not only be theoretically available, but easily accessible in fact.” Wolfgang Alschner
Wolfgang Alschner is an empirical legal scholar specializing in international economic law and the computational analysis of law.
He is a faculty member of the Centre for Law, Technology and Society at the University of Ottawa. He is co-founder of an investment treaty analytics portal and has published in leading peer-reviewed journals. His research focuses on using social and computer science methods in order to empirically investigate international law.
You are a pioneer in the computational analysis of investment and trade treaties and, in particular, one of the founders of the project http://mappinginvestmenttreaties.com/. Can you explain what this is all about? Where did your interest in this approach come from?
For my PhD I wanted to analyze how international investment agreements changed over time. But I was not very keen on reading and coding more than 3,000 treaties. Then one day, I presented my work at an interdisciplinary research seminar where a grad student in economics, Dmitriy Skougarevskiy, suggested that I should treat these treaty texts as data using natural language processing. I did not know what he meant at the time. But this was the beginning of an incredibly rich collaboration with Dmitriy that resulted in http://mappinginvestmenttreaties.com/. The website was initially only meant to accompany a working paper, but it quickly developed a life of its own and is today used by researchers around the world to compare treaties. This experience showed me that computer science techniques have much to offer for the study of law.
On Twitter, you used the example of the enormous size of the recent EU-Singapore Free Trade Agreement to explain that “we need Artificial Intelligence and computational approaches to make sense of trade agreements”. Can you tell us more about how these technologies can help make international law more accessible and understandable to researchers and lawyers?
We lawyers are trained in what may be called a “close reading” of texts. We pay attention to detail, interpret terms and, at times, see legal significance in the placement of a comma. All this is extremely important — but it is not scalable.
To make sense of large amounts of legal texts, be it modern trade agreements with thousands of pages or entire bodies of jurisprudence, we have to resort to what may be called “distant reading” using artificial intelligence and natural language processing to quickly process texts that would take weeks to read closely. This becomes increasingly important as the number of cases and treaties is constantly growing.
“Distant reading” enables us to mine these texts to find patterns, track change and identify outliers. In a 2016 paper, for instance, Dmitriy and I looked at the investment chapter of the Trans-Pacific Partnership Agreement to automatically assess which elements were taken from earlier agreements and which elements were genuine innovations that deserved a closer, manual check. “Distant reading” therefore complements rather than substitutes traditional “close reading” techniques and helps us to navigate legal systems of increasing complexity.
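The core of this kind of pairwise treaty comparison can be sketched in a few lines of code. The Python sketch below computes a Jaccard similarity over word five-grams for two treaty-like clauses; the sample clauses, and the exact choice of five-grams, are illustrative assumptions rather than the project's actual pipeline, which involves far more preprocessing.

```python
def ngrams(text, n=5):
    """Split a text into its set of overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    """Jaccard similarity of two texts' n-gram sets (0 = disjoint, 1 = identical)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

# Illustrative clauses: the second reuses most of the first verbatim
# but inserts the qualifier "in like circumstances".
old = ("each party shall accord to investors of the other party "
       "treatment no less favourable than that it accords to its own investors")
new = ("each party shall accord to investors of the other party "
       "treatment no less favourable than that it accords "
       "in like circumstances to its own investors")

print(round(jaccard(old, new), 2))  # high score flags heavy reuse of the earlier clause
```

Applied across thousands of treaty pairs, a score near 1 flags boilerplate copied from earlier agreements, while a low score flags passages worth a closer, manual read.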
Many lawyers are unfamiliar with programming and artificial intelligence. Should law students learn to code in universities? Or how can lawyers and legal researchers acquire the skills to do this type of analysis?
Technology plays an ever-greater role both in legal practice and in legal research. Young lawyers and law students in particular therefore have to invest in acquiring the skills necessary to be comfortable with a greater role of technology in law. That does not necessarily mean that every lawyer has to become a programmer. A willingness to learn and to stay up to date on legal technology is often enough. At the same time, an increasing number of law schools, particularly in the United States, do offer “Coding for Lawyers” courses. At the University of Ottawa, I teach a course on “Legal Data Science”, where law students learn to code and to solve legal problems in the programming language R. I am in the process of setting up a website, www.datascienceforlawyers.org, that makes my course materials publicly available. In the coming years, I am sure that there will be more and more online resources to help lawyers acquire the necessary skills.
Approaches based on artificial intelligence require large, high-quality and well-curated data sets. What do you think of the current state of access to international law data?
While more and more international legal materials are available online, most of them are tailored for human consumption only. To leverage artificial intelligence and other computational tools, we have to move beyond PDFs and make international law available in more computer-friendly formats as well, such as HTML or XML. In fact, most of the research projects I am involved in, like the Text of Trade Agreements project, revolve around the digitization of texts. I would much prefer to spend my time on mining rather than digitizing texts, but the former is only possible once the latter is done.
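The difference a computer-friendly format makes can be illustrated with a small sketch. The XML fragment below is hypothetical (real projects tend to use richer schemas, such as the Akoma Ntoso legal-document standard); the point is that explicit markup lets a program address articles directly, where a flat PDF would require error-prone layout heuristics.

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable treaty fragment, for illustration only.
TREATY_XML = """
<treaty>
  <article num="1">
    <title>Definitions</title>
    <text>For the purposes of this Agreement, investment means every kind of asset.</text>
  </article>
  <article num="2">
    <title>Promotion of Investment</title>
    <text>Each Party shall encourage investments in its territory.</text>
  </article>
</treaty>
"""

root = ET.fromstring(TREATY_XML)
# With explicit structure, listing every article is one loop,
# with no guessing about page layout or line breaks.
for article in root.findall("article"):
    print(article.get("num"), article.findtext("title"))
```

The same structure also makes bulk comparison across treaties straightforward, since every document exposes its articles under the same tags.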
At the same time, there is a lot of duplication of efforts: researchers who do not want to share their data and legal information providers that prohibit bulk download in the terms of service. Resources that could be better used to analyze legal data are thus wasted on recreating the same datasets. I hope that in the future we will move toward a more open digital infrastructure where data is shared freely and texts are published for both human and computer consumption.
Have you ever heard of open law? Do you think this principle should be promoted at the international level? Do you think academics, States, international organizations, LegalTech have a role to play in opening up legal data?
Yes, definitely. Law is a public good. It should not only be theoretically available, but easily accessible in fact, including for data mining. Open law initiatives are crucial in this respect.
In my view, governments and international organizations have the primary responsibility for making law accessible. LegalTech and academia, however, can help by showing the need for open data and demonstrating the value that can be derived from it. Through partnerships we can create an ecosystem in which public legal data fuels research and business innovation, which is ultimately to the advantage of us all, because it makes the law less complex and more accessible, including to those who would otherwise not have the necessary expertise or resources.
You have been actively commenting on and comparing the new US, Canada and Mexico trade agreement (USMCA). Can you give us some key differences between the NAFTA and USMCA ISDS (Investor-State Dispute Settlement) procedures? Does this mean that states will abandon ISDS in the future, or are they simply trying to get rid of it as much as possible?
The investment law and arbitration landscape is certainly changing very quickly. The recent USMCA, which limits access to ISDS claims against Mexico and eliminates them entirely between the United States and Canada in relation to future claims, is just one example. The EU plans to exclude ISDS from future free trade agreements to ensure that they can be ratified more quickly as treaties falling into exclusive EU competence. Brazil has put forth a BIT model that does not rely on ISDS at all, and so on. In many ways, our world has become a global laboratory where states try and test an increasingly diverse array of institutional alternatives to traditional ISDS. Where the journey goes and whether it will lead to the end of ISDS is hard to say at this point. The ongoing UNCITRAL talks will be key, because they provide a forum to settle on a new consensus for the future of ISDS. In the short term though, we are likely to see further experimentation at the bilateral and regional level like the USMCA.