Ethically Automating Legal Practice

Can AI automate legal practice?

Sylvia E
Fair Bytes
11 min read · Jul 12, 2021


Photo by Tingey Injury Law Firm from Unsplash

The year is 1666, and Gottfried Leibniz has just written his thesis, asking: can “perplexing” legal cases be solved through logic and combinatorics? Though modern computational systems wouldn’t arrive for another three hundred or so years, Leibniz’s belief in the power of systematic, formalized knowledge to transform how we reason has held true (Wolfram).

In 1973, a legal technology company called Lexis introduced the “UBIQ” terminal, which allowed lawyers to search for case law online. Document searching would soon advance into automated document creation, then into litigation support software and case management systems.

Today, artificial intelligence is transforming the way lawyers perform legal tasks. Automating arduous, time-consuming labor like research (SpeedLegal) frees lawyers to devote more personal attention to their clients. Since its genesis in the 2010s, legal AI has transformed legal practice by predicting courtroom outcomes, performing contract reviews, and answering legal questions through chat AIs.

Image From Altman Weil, 2017

Law firms, especially large ones, are increasingly making use of legal AI or exploring legal AI.

Though only 8% of legal firms are currently using legal AI, 29% of legal firms have begun to explore uses for legal AI.

55% of Tier One law firms (firms with >1000 lawyers) already use legal AIs.

As legal AI advances, clients increasingly seek out firms that optimize their costs, promote transparency, and take advantage of tech solutions (Benady).

The use of AI in legal practice is likely to continue, expand, and become commonplace.

While proponents of legal AI are excited about this growth, critics question whether the gains in efficiency will come at the cost of ethical practice. To evaluate this debate, we ask the following questions:

  1. How will legal AIs affect the legal environment?
  2. Can legal AIs adequately fulfill existing legal and ethical responsibilities in the practice of law?
  3. To what degree should legal tasks be automated?

How will legal AIs affect the legal environment?

A key part of understanding how legal AIs will affect the legal environment is understanding what functions legal AIs are capable of.

While some of these systems aren’t fully developed yet, legal AIs are projected to eventually automate the following functions:

Technology-Assisted Review (TAR)

Technology-Assisted Review (TAR) uses predictive coding to train a computer to recognize which documents contained within a database are relevant. TAR software learns which documents are relevant by reviewing a “seed” set of data created by human reviewers. As human reviewers mark documents as “relevant” or “irrelevant,” predictive coding makes logical rules for what makes a document “relevant” or “irrelevant.” As the TAR software receives more and more data, it adjusts its rules and becomes more accurate. Through predictive coding, TAR enables legal practices to synthesize and organize vast amounts of data 50 times more efficiently than human review (Marchant).
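To make this feedback loop concrete, here’s a minimal sketch of predictive coding using scikit-learn. The documents, labels, and model choice are illustrative assumptions, not how any particular TAR product works:

```python
# A toy predictive-coding loop: learn "relevant" vs. "irrelevant" from a
# human-labeled seed set, then rank the unreviewed corpus by relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set labeled by human reviewers (1 = relevant, 0 = irrelevant).
seed_docs = [
    "email discussing the merger timeline",
    "lunch menu for the cafeteria",
    "draft of the merger agreement",
    "office holiday party invitation",
]
seed_labels = [1, 0, 1, 0]

# The model's learned weights play the role of the "logical rules" above.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score unreviewed documents; the highest-scoring ones go to humans first.
corpus = ["merger due diligence checklist", "parking validation policy"]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, score in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

As reviewers confirm or correct these predictions, their labels join the seed set and the model is refit, which is where the efficiency gain over linear human review comes from.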

Litigation Analytics

Litigation Analytics uses algorithms to predict the probability of winning a case or receiving a certain verdict (Marchant). The algorithm does this by comparing the current case to data from prior cases in U.S. courts’ public records. Litigation analytics helps lawyers decide whether to pursue a case and lets them adapt their strategy (Reuters); it can also analyze law firm billing information to help firms become more cost-efficient (Marchant).
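As a rough illustration, outcome prediction can be framed as classification over features extracted from prior cases. Everything below (the features, numbers, and model) is invented for demonstration:

```python
# A hedged sketch of litigation analytics as supervised classification.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per past case: judge's historical plaintiff win rate,
# damages sought (in $100k), and count of similar prior filings in the district.
past_cases = [
    [0.62, 5.0, 14],
    [0.30, 1.2, 3],
    [0.75, 9.5, 22],
    [0.41, 2.0, 8],
]
outcomes = [1, 0, 1, 0]  # 1 = plaintiff win, 0 = plaintiff loss

model = RandomForestClassifier(random_state=0).fit(past_cases, outcomes)

# Estimate the win probability for the case at hand.
current_case = [[0.55, 4.0, 10]]
print(f"Estimated win probability: {model.predict_proba(current_case)[0, 1]:.0%}")
```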

Practice Management Assistants

Practice Management Assistants (PMAs) use natural language processing, a subfield of AI that analyzes and manipulates speech and text, to “read” documents and provide legal analysis by cross-checking different documents.

One of the most famous PMAs is RAVN, which helps analyze data for real-estate deals. RAVN is said to be 12 million times quicker than an associate doing the same task, automating 2 weeks of work in 2 seconds (Marchant).
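Here’s a toy sketch of the cross-checking idea: comparing clauses from two documents with TF-IDF similarity and flagging near-matches for a human to reconcile. The clauses and threshold are made up, and real assistants like RAVN rely on far richer language models and extraction rules:

```python
# Flag similar-but-not-identical clauses across two real-estate documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

lease_clause = "Tenant shall pay rent of $4,000 per month, due on the first."
summary_clause = "Monthly rent is $4,500, payable on the first of each month."

vectors = TfidfVectorizer().fit_transform([lease_clause, summary_clause])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]

# A near-match with different figures ($4,000 vs. $4,500) is exactly the
# kind of inconsistency a human should be asked to reconcile.
if similarity > 0.3:
    print(f"Possible match (similarity {similarity:.2f}); verify the figures.")
```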

“Wrongdoing” Detection

“Wrongdoing” detection uses predictive coding to search company records for signs of bribery, fraud, compliance violations, and litigation risk. These AIs can summarize data, find code words, analyze the frequency of communication within documents, and analyze the mood of the speakers (Reuters). However, some of them draw from historical crime data, which often contains embedded biases. Without close supervision, “wrongdoing” detection tools could falsely condemn the innocent (HBR).
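The core mechanics can be sketched naively: search messages for code words and track each sender’s communication frequency. The word list and messages below are hypothetical, and production tools layer trained models and human review on top:

```python
# A naive "wrongdoing" screen: code-word hits plus per-sender message counts.
from collections import Counter

CODE_WORDS = {"sunshine", "special delivery", "off the books"}  # assumed examples

messages = [
    ("alice@corp.com", "Invoice attached for Q3 consulting."),
    ("bob@corp.com", "Handle the payment off the books, please."),
    ("bob@corp.com", "More sunshine coming your way next week."),
]

sender_volume = Counter(sender for sender, _ in messages)
for sender, text in messages:
    hits = [w for w in CODE_WORDS if w in text.lower()]
    if hits:
        # Every flag is a lead for a human investigator, not a verdict:
        # word lists learned from biased data can implicate the innocent.
        print(f"FLAG {sender}: {hits} (messages from sender: {sender_volume[sender]})")
```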

Legal Bots

Legal Bots use chat AIs to answer customized questions about the law or assist with specific legal questions. For example, the legal tech startup DoNotPay is an AI-powered chatbot that has helped over 160,000 people resolve their parking tickets. While many bots are being developed to help current or prospective clients of law firms, some are being developed for pro bono firms, increasing legal access for those who have traditionally struggled to receive it (Marchant).
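At its simplest, a legal bot is intent matching over vetted answers. The keywords and responses below are invented for illustration, and real products like DoNotPay are far more capable:

```python
# A bare-bones, rule-based legal bot; everything here is a placeholder.
RESPONSES = {
    "parking": "You can often contest a ticket if signage was missing or unclear.",
    "deadline": "Appeal windows vary; check the fine print on your ticket.",
}

def legal_bot(question: str) -> str:
    q = question.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in q:
            return answer
    return "I can't answer that; consider consulting a lawyer."

print(legal_bot("What's the deadline to appeal my ticket?"))
```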

Legal Decision Making

Legal Decision Making uses algorithms to settle disputes in a court of law. These algorithms could play the role of lawyer, judge, and jury. While this type of AI is still new, the relatively successful implementation of similar systems in China’s judicial system hints at future adoption in the Western world as well (SCMP, Marchant).

The Legal Environment

The focal point of all these AI technologies is their power to analyze vast amounts of data. This means the intensive human labor once necessary to perform basic legal functions may no longer be needed.

This has practical implications for four aspects of legal practice: the role of lawyers as legal practitioners, the need for lawyers by firms, the accessibility of legal services, and judicial practice.

The Role Of Lawyers As Legal Practitioners

As AI is increasingly implemented, lawyers will shift from research and data-gathering roles to advisory and application roles (Reuters, ABA).

The American Bar Association cites 4 human functions critical to the law that AI is currently unable to fulfill: judgment, empathy, creativity, and adaptability. For now, lawyers must use those four characteristics to interpret the results of legal AIs in order to deliver suitable legal solutions.

The Need For Lawyers By Firms

Given the decreased need for research and data-gathering, employees in research roles will likely be phased out of legal practice. However, new jobs will be created: legal engineers will be needed to manage, develop, review, and write algorithms for AI (Reuters).

The Accessibility of Legal Services

As AI increasingly automates tasks, legal services will become more accessible to everyone. By giving more people the ability to pursue, or defend themselves from, legal action, legal AI has the potential to promote justice in the legal system (ABA).

Judicial Practice

Lastly, legal AI has the potential to transform judicial practice. Though courtrooms have largely moved back to an in-person format as the COVID-19 pandemic wanes, legal AI could make virtual courtrooms the norm. Automated dispute resolution by AI bots could remove the need for courtrooms almost entirely (ABA).

Can legal AIs adequately fulfill existing legal and ethical responsibilities in the practice of law?

Legal AI’s potential to alter legal practice is heavily scrutinized because of the ambiguity surrounding what regulations are needed to ensure it is ethical. Without regulation and guidance, legal AIs could pervert the course of justice and commit serious transgressions.

To understand what regulatory practices should be taken against legal AI, we have to ask: Can legal AIs adequately fulfill existing legal and ethical responsibilities in the practice of law?

The ambiguity and complexity of legal situations can call for a level of thought AIs may not be able to emulate. Even humans can struggle to choose fair and just solutions while reconciling difficult legal dilemmas.

Policymakers who are aware of such deficiencies in legal AI’s legal and ethical capabilities are better equipped to craft regulations that prevent unintended consequences. Answering this question helps them make informed decisions about how to make the inclusion of legal AIs in legal practice ethically, legally, and practically sound.

The American Bar Association defines 5 legal and ethical responsibilities lawyers (and by extension, legal AI) must fulfill.

The Duty of Competence

The duty of competence states: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation” (ABA).

One of the issues with current AIs is that their underlying algorithms contain implicit bias and lack transparency. AI’s implicit bias brings its competence as a legal aid into question: without a reliable algorithm, legal AIs could easily misconstrue the facts of a case.

A lack of transparency exacerbates this issue by making it hard for lawyers to determine whether an AI’s algorithm puts it at risk for significantly harmful prejudices or judgments.

On the other hand, the duty of competence here may well fall on the lawyer: since the lawyer chooses the legal AI, the lawyer is responsible for choosing competent legal aid. As more states adopt mandatory technology-competence rules for lawyers, lawyers may become more cognizant of how to uphold ethical and legal standards, or they will need to consult experts who can advise them (ABA).

Legal Liability

Here, legal liability concerns who is liable for damages in the case of an AI mishap. The ABA cites two legal liability issues surrounding AI:

First, to what extent would lawyers be responsible for improper usage of an AI solution?

Second, who should be held responsible? The law firm? The creator of the defective software?

And when courts have to answer both these questions, will the court consider the lawyer’s competence in choosing a solution?

One answer is that AI providers will likely include liability provisions in their contracts. However, that still doesn’t settle who should be held responsible; answering that question will require more ethical consideration on the part of AI companies and legal firms (ABA).

The Duty of Confidentiality

The duty of confidentiality states: “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client” (ABA). Lawyers using legal AIs must recognize that these systems retrieve and store large quantities of data in the cloud, and must be diligent in understanding how and where that data is stored in order to protect client confidentiality (ABA).

Luckily, federal law addresses some of these confidentiality concerns by requiring certain data to remain within the physical boundaries of the US. In addition, privacy laws such as HIPAA and FERPA have set precedents for how sensitive data must be confidentially stored.

The Duty to Supervise

The duty to supervise requires lawyers to oversee their subordinates, which includes third-party providers like legal AIs. Notably, the duty to supervise assumes that lawyers have fulfilled their duty of competence, because it expects them to competently choose and supervise legal AI vendors.

In this vein, the duty to supervise is also intertwined with issues of liability: when determining who is liable, did the law firm choose an appropriate AI, and did they appropriately supervise the use of the AI?

The Duty to Communicate

The duty to communicate describes lawyers’ duty to fully explicate legal materials, and anything associated with them, to their clients. In practice, this means lawyers must be able to sufficiently explain and understand how they chose, used, and supervised legal AIs to remain ethically and legally sound. This highlights the need for technological competence among the next generation of lawyers.

Regulation

Regulation by policymakers could allow government entities to address these 5 ethical and legal issues by ensuring legal firms use only AIs suitable for the legal sphere.

To what degree should legal tasks be automated?

By addressing deficiencies, creating sufficient regulations, and most importantly, mandating human supervision, legal AIs may be able to adequately fulfill legal and ethical responsibilities in the practice of law.

One of the largest issues, however, is how much human supervision is needed. In other words: To what degree should legal tasks be automated?

A paper from Levy and Remus estimates that 13–23% of legal work can be automated for lawyers, and 69% of legal work can be automated for paralegals. This automated work can be grouped in the table below:

This table groups core tasks in legal practice by how much automation could reduce the time they take.

From Nussey

The strong employment category describes tasks that can be almost fully automated, reducing the time typically spent on them by 85%.

The moderate employment category describes tasks that can be partly automated, reducing time spent by 19%.

The light employment category describes tasks that can be minimally automated, or not at all, reducing time spent by just 5%.

*Tier One firms have >1,000 lawyers

**Tier Two firms have <25 lawyers
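To see how these category-level estimates combine, here is a worked example with purely hypothetical hour allocations:

```python
# Hypothetical weekly hours a paralegal spends on tasks in each category,
# combined with the time reductions cited above.
hours = {"strong": 10, "moderate": 25, "light": 15}
reduction = {"strong": 0.85, "moderate": 0.19, "light": 0.05}

saved = sum(hours[c] * reduction[c] for c in hours)
total = sum(hours.values())
print(f"Hours saved: {saved:.1f} of {total} ({saved / total:.0%})")
# 10*0.85 + 25*0.19 + 15*0.05 = 14.0 hours, i.e. 28% of the week.
```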

Much of the debate on how much work should be automated concerns disagreement over how capable legal AIs are. For example, most people are confident in AI’s ability to successfully perform document review. There is more disagreement over whether AI can effectively perform legal writing, advise clients, communicate, investigate facts, negotiate, and, for fairly obvious reasons, make court appearances (Nussey).

In addition, AIs can’t automate the emotive, creative aspects humans bring to the table. For example, an AI recording affidavits or statements for a court of law might miss the emotional or nonverbal cues lawyers use to get a sense of the information. Not everybody agrees, however: McKinsey argues that while automating emotive tasks is hard, it isn’t unviable, and that most workers spend minimal time on tasks requiring creative or emotive skills (Nussey).

The degree to which legal tasks ought to be automated is contingent on information we don’t always have readily available: how effective are AIs at performing the tasks we ask of them, and are the mechanisms underlying their actions appropriate?

A study by Kauffman and Soares points out that the information we have on legal AI is severely limited in three areas: (1) data, (2) algorithms, and (3) implementation. The first two issues stem from a lack of readily available, analyzable, high-quality data sets, as well as potentially biased research methods. Implementation models, meanwhile, lack study on how their output can be maximized: they require further research into “clearly defined use case and work process, strong technical expertise, extensive personnel and algorithm training, and well-executed change management processes.”

Until more of that information is available, there is no clear answer to what degree legal tasks ought to be automated.

Final Thoughts

So, can “perplexing” legal cases be solved through logic and combinatorics? And can we do so ethically? The answer to the first question is yes; to the second, maybe. The ethical use of AI in legal practice seems to hinge on two main threads:

  1. AIs are responsible for fulfilling the same legal and ethical obligations lawyers have. However, lawyers are responsible for competently choosing AIs that are able to fulfill these obligations.
  2. In turn, regulatory organizations are responsible for studying, researching, and understanding the applications of legal AI. They must then choose how to regulate legal AIs in the public sphere to limit potentially harmful and easily avoidable consequences. All this information must be fully accessible to the public so they can make informed and ethical decisions.

Sylvia E is a philosophy and cognitive science student at the University of Illinois at Urbana-Champaign. She’s interested in law/public policy, AI ethics, and moral cognition. Feel free to connect with her at https://www.linkedin.com/in/esylvia/
