Top 5 applications of Large Language Models (LLMs) in legal practice

How lawyers can use LLMs to enhance productivity and efficiency when doing legal research, drafting documents, and performing other tasks

Pavlo Sydorenko
JURDEP
13 min read · Aug 22, 2023



AI and the future of the legal profession

Intelligent automation, also referred to as cognitive 🧠 automation, has made remarkable strides in various industries, including the legal sector. However, it is highly unlikely that robots will completely replace lawyers in the near future. Although the legal profession involves a significant amount of routine tasks that can be automated, there are many complex assignments that require human judgment and interpretation. Furthermore, the importance of soft skills and emotional intelligence cannot be overstated, as they are crucial in building client relationships, negotiating, and mastering the art of courtroom speaking.

Networking has always been an essential aspect of the legal profession. Can you imagine a networking event for robot lawyers? No awards and recognitions, just an endless exchange of terabytes of data 🙂

Even if AI demonstrates a deep understanding of context, performs subjective analysis without explicit guidance, and replicates the ability to navigate complex human interactions in line with high ethical and moral standards, it will still face significant barriers to entering the legal profession. Guarded by humans, those barriers will be kept in place for as long as possible. A paradigm shift is always a long process.

Although AI-powered tools are not expected to completely replace human lawyers in the near future, they have the potential to transform the legal profession by automating certain tasks and assisting lawyers in their work. However, a report from Gartner reveals that many legal teams are struggling to adopt advanced legal technologies and this disillusionment with sophisticated innovations often leads to a failure to embrace more mature technologies.

Source: Gartner (October 2022)

For instance, legal technologies like contract lifecycle management and legal matter management have already reached the Slope of Enlightenment on the Hype Cycle, indicating that they are maturing and have proven benefits. Yet their market penetration remains below 50%, which suggests that legal teams are missing out on the advantages these technologies can offer. Furthermore, numerous sophisticated technologies are still in the Trough of Disillusionment, which implies that the hype surrounding them exaggerates their current capabilities, or that digital dexterity within the legal sector is not high enough, or both.

The role of AI, particularly Large Language Models, has been met with skepticism by some lawyers. Many have given up after playing with general applications like chatbots and, unsurprisingly, running into their shortcomings in the legal domain, not to mention the issues of hallucination and bias. The truth is that fine-tuning LLMs and integrating them with internal data sources, such as databases and collections of documents, as well as external applications and APIs, can greatly improve their performance and usability in the legal domain. However, the lack of technical skills often hinders lawyers from fully exploring these opportunities.

This also raises the question of the required level of technical dexterity among lawyers. I have already shared my thoughts on this topic in a previous publication, which you can find by following the link below. In this article, I would like to highlight the key applications of LLMs for lawyers. I hope this will contribute to improving the understanding of the transformative potential of this technology.

Five ways lawyers can use LLMs

Despite the above-mentioned shortcomings, LLMs can already provide considerable benefits to both lawyers and their clients. These applications can significantly enhance productivity and efficiency, while also offering less tangible but crucial benefits such as reducing stress and preventing burnout. While law firms typically have systems in place to manage workflows and track billable hours, in-house legal teams often struggle with unmanageable workloads. Integrating LLMs with legal intake, triage, and matter management solutions can effectively address this issue, enabling in-house lawyers to ‘do more with less’. However, it is essential to remember that most LLMs need to be customized or fine-tuned to fully unlock their potential within the legal domain.

So, lawyers can benefit the most from using Large Language Models (LLMs), or LLM-powered legal tech solutions, in the following scenarios:

  1. Legal research

LLMs are highly efficient at analyzing large volumes of data and extracting relevant information based on your query. However, these capabilities are limited to the training corpus — i.e., the data used to train the model. Although training corpora may contain vast amounts of data, LLMs typically lack any information that emerged after their training. To overcome this limitation and obtain access to the latest regulations, legal cases, and other up-to-date data, it is possible to integrate the model with proprietary or third-party online databases. Nonetheless, an ‘out of the box’ LLM can still be a good starting point, especially when conducting research in a new field or when historical data remains relevant. For instance, let’s consider this simple example of research on the regulation of AI.
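To make the integration idea concrete, here is a minimal, hypothetical sketch of grounding a research prompt in an up-to-date document store. The store, the keyword-overlap scoring, and the function names are all illustrative stand-ins for a real legal database or vector search.

```python
# Illustrative sketch: retrieve the most relevant passages from an
# up-to-date store and prepend them to the prompt, so the model is not
# limited to its training corpus. Naive keyword overlap stands in for
# real retrieval (e.g., vector search over a case-law database).

def retrieve(query: str, store: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share."""
    words = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, store: list[str]) -> str:
    """Prepend the retrieved passages so the model answers from them."""
    context = "\n".join(retrieve(query, store))
    return f"Using only the sources below, {query}\n\nSources:\n{context}"

store = [
    "The EU AI Act introduces risk-based obligations for AI systems.",
    "The CCPA governs consumer privacy in California.",
    "The GDPR regulates processing of personal data in the EU.",
]
prompt = build_prompt("summarize current EU rules on AI systems", store)
```

In a production setup, the retrieval step would query a live regulatory database or search index, but the prompt-assembly pattern stays the same.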

Prompt: Conduct legal research on policies and regulations related to Artificial Intelligence. Please summarize relevant regulations by region or country and include citations. Make conclusions based on your research about key developments in this field. Based on the current trends, make your forecast about how AI will be regulated in the future.

Response from the GPT 3.5 model (“gpt-3.5-turbo”):

1. European Union (EU):

The EU has been at the forefront of AI regulation. The European Commission released the “White Paper on Artificial Intelligence” in 2020, proposing a regulatory framework for AI. The proposed regulations aim to ensure AI is trustworthy, ethical, and respects fundamental rights. The EU is also considering a potential ban on certain high-risk AI applications. [1]

2. United States (US):

The US lacks comprehensive federal AI regulations, but there are sector-specific regulations and guidelines. The Federal Trade Commission (FTC) has issued guidelines on AI transparency, fairness, and explainability. Additionally, several states like California have implemented legislation related to AI, such as the California Consumer Privacy Act (CCPA). [2]

3. China:

China has been actively developing AI regulations. The country has issued guidelines and standards for AI ethics, data protection, and cybersecurity. The Chinese government is also promoting the development of national AI standards and has implemented regulations on facial recognition technology use. [3]

4. Canada:

Canada has adopted a proactive approach to AI regulation. The Canadian government has released the “Directive on Automated Decision-Making,” which provides guidelines for the use of AI in the federal public administration. Canada is also working on a national AI strategy and has established the Pan-Canadian Artificial Intelligence Strategy. [4]

5. Japan:

Japan has been focusing on AI regulation to promote innovation while ensuring ethical and responsible AI development. The Japanese government has established the “AI R&D Guidelines” and the “AI Utilization Principles.” Additionally, Japan has implemented the “Personal Information Protection Act” and the “Act on the Protection of Personal Information” to safeguard privacy in AI applications. [5]

Regarding the future of AI regulation, it is likely that more countries will adopt comprehensive AI regulations to address concerns related to ethics, privacy, and accountability. International collaboration and harmonization of AI regulations may also become more prominent, considering the global nature of AI development and deployment. As AI technologies continue to evolve, regulations may need to adapt to new challenges and developments, such as the increasing use of AI in autonomous vehicles, healthcare, and financial sectors.

Please remember to consult official sources and legal experts for the most accurate and up-to-date information on AI regulations in specific regions or countries.

[1] European Commission, “White Paper on Artificial Intelligence,” 2020.

[2] Federal Trade Commission, “Using Artificial Intelligence and Algorithms,” 2020.

[3] National Development and Reform Commission of China, “New Generation Artificial Intelligence Development Plan,” 2017.

[4] Government of Canada, “Directive on Automated Decision-Making,” 2019.

[5] Ministry of Internal Affairs and Communications of Japan, “AI R&D Guidelines,” 2019.

In addition to connecting the model to up-to-date data sources, you can improve the generated response with prompting. For example, let us try to narrow down the scope of our research.

Prompt: Please conduct legal research on policies and regulations in the EU related to Artificial Intelligence. Please include citations.

Response from the GPT 3.5 model (“gpt-3.5-turbo”):

Title: Policies and Regulations in the European Union related to Artificial Intelligence

Introduction: The European Union (EU) has recognized the importance of Artificial Intelligence (AI) and its potential impact on society, economy, and fundamental rights. Consequently, the EU has been actively developing policies and regulations to ensure the responsible and ethical use of AI. This legal research aims to provide an overview of key policies and regulations in the EU related to AI, with relevant citations.

1. General Data Protection Regulation (GDPR): The GDPR, implemented in May 2018, is a comprehensive data protection framework applicable to all EU member states. While not specifically designed for AI, it contains provisions that impact AI technologies. Article 22 of the GDPR addresses automated individual decision-making, including profiling, which is relevant to AI systems that make decisions without human intervention. It provides individuals with the right to not be subject to solely automated decisions, and the right to obtain human intervention. [1]

2. Ethical Guidelines for Trustworthy AI: The European Commission’s High-Level Expert Group on AI (AI HLEG) published the “Ethical Guidelines for Trustworthy AI” in April 2019. These guidelines aim to promote the development and deployment of AI that is trustworthy, respects fundamental rights, and complies with legal obligations. The guidelines emphasize transparency, accountability, and human oversight in AI systems, and provide practical recommendations for developers and deployers of AI. [2]

3. AI Act Proposal (Regulation on a European approach for Artificial Intelligence): The European Commission proposed the AI Act on April 21, 2021, as a comprehensive regulation to govern AI systems in the EU. The Act aims to establish a harmonized legal framework to ensure AI’s safe and ethical use across various sectors. It introduces four categories of AI systems: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems, such as those used in critical infrastructure, healthcare, and law enforcement, would be subject to stricter requirements, including conformity assessments, data governance, and human oversight. The AI Act also prohibits certain AI practices, such as social scoring and subliminal techniques. [3]

4. Regulation on Machinery Products with AI (Machinery Directive): The EU’s Machinery Directive (2006/42/EC) is a product safety legislation that applies to machinery placed on the EU market. In 2021, the European Commission proposed an amendment to the Machinery Directive to address AI-specific risks. The amendment aims to ensure that AI incorporated into machinery meets safety requirements, including transparency, accountability, and human oversight. [4]

5. European Data Strategy: The European Data Strategy, published in February 2020, highlights the importance of AI and outlines the EU’s vision for data-driven innovation. It emphasizes the need to unlock the potential of AI while ensuring respect for privacy, data protection, and fundamental rights. The strategy promotes the development of common European data spaces and the facilitation of data sharing for AI research and development. [5]

Conclusion: The EU has taken significant steps to develop policies and regulations that address the ethical, legal, and societal implications of AI. The GDPR, Ethical Guidelines for Trustworthy AI, AI Act Proposal, Machinery Directive, and European Data Strategy are key examples of the EU’s efforts to ensure the responsible and ethical use of AI within its member states.

Citations:

[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[2] European Commission, “Ethical Guidelines for Trustworthy AI,” April 2019.

[3] European Commission, “Proposal for a Regulation on a European approach for Artificial Intelligence,” April 2021.

[4] European Commission, “Amendment to the Machinery Directive 2006/42/EC,” April 2021.

[5] European Commission, “A European Strategy for Data,” February 2020.
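A prompt like the one above can be turned into a reusable template, so the same research request can be scoped to any jurisdiction or topic. This is a small sketch; the template wording mirrors the prompt used above, and the function name is my own.

```python
# Hypothetical prompt template for jurisdiction-scoped legal research.
RESEARCH_TEMPLATE = (
    "Please conduct legal research on policies and regulations "
    "in {jurisdiction} related to {topic}. Please include citations."
)

def research_prompt(jurisdiction: str, topic: str) -> str:
    """Fill the template with a concrete jurisdiction and topic."""
    return RESEARCH_TEMPLATE.format(jurisdiction=jurisdiction, topic=topic)

prompt = research_prompt("the EU", "Artificial Intelligence")
```

Keeping prompts in templates like this also makes it easy to iterate on the wording in one place as you learn what produces the best responses.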

Please check some tips on prompting for lawyers below.

2. Drafting legal documents

LLMs can generate high-quality drafts of the most popular types of legal documents and letters, even without fine-tuning and other customization. These models are typically trained on terabytes of text, thus they can easily generate Non-Disclosure Agreements, Lease Agreements, License Agreements, and other standard contracts, right out of the box. However, it is highly recommended to review drafts generated by LLMs before sending them to clients or signing 🙂, as there is a possibility of errors caused by an ambiguous context or AI hallucination.

Nonetheless, LLMs serve as an excellent starting point, particularly if you do not have your own extensive collection of documents or when pioneering a new domain. By showing your previous drafts as examples to a model (assuming they are not already part of its training data corpus), it can mimic your style and meet your specific requirements regarding the structure of the document. You just need to instruct the model properly.
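One common way to "show" the model a previous draft is few-shot prompting: include a prior request and your own draft as an example exchange before the new task. The sketch below follows the OpenAI Chat Completions message convention; the sample NDA text and party names are placeholders, not real clauses.

```python
# Sketch of few-shot drafting: seed the conversation with one of your
# own earlier drafts so the model mimics its style and structure.

def drafting_messages(
    instruction: str, example_request: str, example_draft: str
) -> list[dict]:
    return [
        {"role": "system",
         "content": "You are a legal drafting assistant. Match the style "
                    "and structure of the example draft."},
        {"role": "user", "content": example_request},     # a prior request...
        {"role": "assistant", "content": example_draft},  # ...and your own draft
        {"role": "user", "content": instruction},         # the new task
    ]

messages = drafting_messages(
    "Draft a mutual NDA between Acme Ltd and Beta GmbH.",
    "Draft a one-way NDA for a software vendor.",
    "NON-DISCLOSURE AGREEMENT\n1. Definitions...\n2. Obligations...",
)
```

Presenting your draft as the assistant's own previous answer is the key trick here: the model treats it as an example of what a good response looks like.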

3. Litigation analytics and Case prediction

One of the most promising yet controversial applications of AI in the legal domain is the prediction of litigation outcomes. Machine learning algorithms have already been used to analyze litigation data, identifying trends, patterns, including in the behavior of judges and opponents, and successful litigation strategies in specific case types or jurisdictions. LLMs can contribute to these efforts with their advanced capacity to analyze and generate texts, assisting lawyers in finding the most effective wording and overall argumentation. Overall, both lawyers and clients can get valuable insights to make informed decisions and develop effective strategies. For example, based on the prediction of success rates, risks, and costs, clients may decide to settle rather than litigate.
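The kind of trend extraction described above can be illustrated with a toy example. The dataset and field names below are entirely hypothetical; real litigation-analytics products work from far richer features, but the aggregation idea is the same.

```python
# Illustrative only: plaintiff win rates per judge from a hypothetical
# case dataset, the simplest form of the trend analysis described above.
from collections import defaultdict

def win_rates(cases: list[dict]) -> dict[str, float]:
    """Fraction of cases won by the plaintiff, per judge."""
    totals, wins = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c["judge"]] += 1
        wins[c["judge"]] += c["plaintiff_won"]
    return {j: wins[j] / totals[j] for j in totals}

cases = [
    {"judge": "Judge A", "plaintiff_won": True},
    {"judge": "Judge A", "plaintiff_won": False},
    {"judge": "Judge B", "plaintiff_won": True},
]
rates = win_rates(cases)  # {"Judge A": 0.5, "Judge B": 1.0}
```

Note that, as discussed below, aggregating statistics about individual judges is prohibited in some jurisdictions, so even this simple analysis is not legally neutral everywhere.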

One of the main challenges with using AI for predicting litigation outcomes is the difficulty in validating the results in advance. This means that a flawed strategy may only be realized when it is too late. Further complexity is added by the involvement of multiple agents. While providers of such Legal Tech solutions claim that AI can help lawyers anticipate opposing counsel’s moves and prepare a superior strategy, what happens when the opposing counsel also applies AI-powered tools? This situation can resemble a Keynesian beauty contest, where everyone tries to predict the evaluations of others.

While this may not be a significant issue for standard, straightforward cases with predictable outcomes, it becomes increasingly challenging when two opposing parties have ample resources and a strong motivation to win, especially when there is a lot at stake. Can you imagine yourself explaining to a client that you lost the case because of a bad AI prediction? Of course, these complexities are multiplied in criminal and other cases where there is no business, just personal.

Furthermore, it is important to note that in certain jurisdictions this application may face stricter regulations. For example, in March 2019, the evaluation, analysis, comparison, or prediction of the behavior of individual judges was banned in France. This is a good illustration of the potential power of this technology.

Despite these controversies and drawbacks, litigation analytics and case prediction remains one of the most exciting and promising applications of AI in the legal domain.

4. Due Diligence, Compliance, and E-discovery

AI-powered tools can streamline the due diligence and compliance process by automating the review of contracts and other relevant documents. This allows lawyers to efficiently identify potential risks, ambiguous clauses, hidden obligations, or non-compliance issues, saving time and reducing human errors. To fully leverage the benefits of this application, it is essential to connect the LLM to external data sources, as compliance requires monitoring changes in laws, bylaws, regulations, and corporate policies. However, even without external data sources, the model can still detect anomalies in a provided document.
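As a crude stand-in for the LLM-based review described above, even simple keyword scanning can surface clauses worth a closer look. The risk-term list below is illustrative, not a compliance checklist, and the clause-splitting on periods is deliberately naive.

```python
# Naive sketch: flag potentially risky clauses in a contract by keyword.
# A real pipeline would let an LLM classify each clause instead.

RISK_TERMS = ["indemnify", "auto-renew", "unlimited liability", "exclusive"]

def flag_clauses(contract_text: str) -> list[tuple[int, str]]:
    """Return (clause number, clause text) pairs containing a risk term."""
    clauses = [c.strip() for c in contract_text.split(".") if c.strip()]
    return [
        (i + 1, clause)
        for i, clause in enumerate(clauses)
        if any(term in clause.lower() for term in RISK_TERMS)
    ]

contract = (
    "The Supplier shall deliver the goods on time. "
    "The Customer shall indemnify the Supplier against all claims. "
    "This Agreement shall auto-renew annually."
)
flags = flag_clauses(contract)  # clauses 2 and 3 are flagged
```

Swapping the keyword test for a per-clause LLM call turns this skeleton into the automated review workflow described above, with the flagged clauses routed to a lawyer for confirmation.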

5. Expertise automation

Question answering over a document or any other data source is another powerful feature of LLMs. It combines the model’s proficiency in understanding natural language, which is used to ask a question, with its ability to analyze text in data sources. This means that LLMs can not only retrieve the most relevant information but also provide a human-like response with explanations of their findings. This feature can be particularly useful for in-house lawyers in large companies who often spend a lot of their time explaining internal policies and regulations to other employees. For example, if an LLM is provided with access to internal documents, it will be able to answer questions about the content of such documents as a legal support bot. It is important to note that in this case, the model may be instructed to ignore its other knowledge to avoid potential conflicts with external sources of information.


Prompt 1: What is the population of Mars?

Response from the GPT 3.5 model (“gpt-3.5-turbo”):

As of now, there is no known population on Mars as it is not currently habitable for humans. However, there are plans for future human missions to Mars, which may eventually lead to a population on the planet.

Now, let us instruct the model to use the provided information when answering the question. Our fake_wiki file includes the following line: “The population of Mars is 156,734 muskians”.

Prompt 2: Based on the information provided in fake_wiki, tell me about the population of Mars?

Response from the GPT 3.5 model (“gpt-3.5-turbo”):

Based on the information provided, the population of Mars is 156,734 muskians. However, without further context or information, it is difficult to provide a comprehensive description of the population of Mars. Additional details such as the composition of the population, their characteristics, or any other relevant information are necessary to provide a more detailed analysis.
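The grounding trick from the Mars example can be expressed as a system instruction that restricts the model to the supplied context. The sketch below follows the OpenAI Chat Completions message convention; the fake_wiki line reproduces the article's example.

```python
# Sketch of grounded question answering: the system message tells the
# model to answer only from the provided context and ignore its own
# knowledge, as in the fake_wiki example above.

def grounded_messages(context: str, question: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                "Answer using only the context below. Ignore any other "
                "knowledge you have. If the context does not contain the "
                "answer, say so.\n\nContext:\n" + context
            ),
        },
        {"role": "user", "content": question},
    ]

fake_wiki = "The population of Mars is 156,734 muskians."
messages = grounded_messages(fake_wiki, "What is the population of Mars?")
```

For a legal support bot, the context would be the relevant internal policy (retrieved per question), and the "say so" fallback matters: it is what keeps the bot from hallucinating answers the documents do not support.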

Conclusion: The trap of junior lawyers

LLMs have already demonstrated better performance on certain tasks than many ‘fresh’ law school graduates, and their ability to understand natural language continues to evolve. This raises two concerns. First, there is a risk that junior lawyers may rely too heavily on this technology, damaging their own learning curve. After years of reviewing, paraphrasing, and refining ChatGPT outputs, they may become skilled prompt writers rather than lawyers. Second, if LLMs improve much faster than junior lawyers progress in their professional development, coupled with wider adoption of legal tech tools, senior lawyers may find it more efficient to proofread the output of LLMs rather than the work of junior lawyers.

In a broader sense, it is worth considering whether all clients are willing to pay lawyers the same rates for reviewing AI-generated documents as they did previously for drafting documents from scratch. This raises numerous questions that require further exploration. Perhaps it is necessary to write another article to address these concerns and delve deeper into the above-mentioned applications of LLMs in the legal field.

Thank you for reading. Please clap 👏, if you find it interesting.


Head of Legal Ops & Analytics for an in-house team of over 500 lawyers | 15 + years of overall experience in Analytics | Ph.D. in International Economics