Robot, Esq.? Four Reasons Lawyers Shouldn’t Fear AI and Automation Legal Tech

Christian Lang
10 min read · Feb 18, 2017


(From Blacklines & Billables)

AI is all the rage. In the exhibit hall at ALM's Legalweek (formerly Legaltech) 2017 conference, you couldn't turn around without seeing a dozen companies touting "machine-learning" capabilities or peddling tools designed to streamline or automate the legal tasks of everyone from end consumers to law firm partners. Which isn't surprising. As noted in Blacklines & Billables' "look back" at #Legalweek17, one of the key thematic takeaways from the week for us was that legal technology, particularly AI-driven legal tech, has reached some critical functionality milestones (in some cases, very recently and rapidly) and is poised to transform the modern practice of law.

So is it all over for lawyers? Should we just give up our JDs and sign up for coding classes?

Below, we discuss four reasons why we think lawyers and the firms they work for shouldn’t fear AI and automation technologies becoming a part of, and even changing, their practices.

Today’s AI: What It Is, What It’s Not

When most lawyers (who aren't also technologists) hear "AI", they think R2-D2 or HAL (maybe Jarvis for the younger generation). So it's worth taking a moment to clarify what we mean (and don't mean) by AI.

AI is a label that simply refers to technology that appears to be intelligent from the behavior it exhibits. It’s nothing new. In its most basic form (involving coding sets of algorithmic rules to translate certain inputs into certain outputs), AI has been around for decades.

So if it has been around for decades, why don't we have C-3POs walking around law firms instead of paralegals? In short, it's because no one has cracked the code on creating "general" or "strong" AI (in essence, artificial cognition with human-like intelligence, reasoning, and learning capabilities) as opposed to limited or "weak" AI, which is simply the ability of machines to exhibit intelligent behavior in certain discrete ways. That ability is limited by a number of key factors, including (most simply) the constraints imposed by the rule-based algorithms employed by known AI approaches, the quality and size of the data set used to "train" the chosen algorithm(s), and the quality and quantity of the feedback the system receives from humans to correct mistakes and improve future results.
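To make the rule-based form of "weak" AI concrete, here is a minimal sketch: a toy document router that maps inputs to outputs through hand-coded rules. The function name, categories, and keywords are all illustrative assumptions, not any real system; the point is that every behavior must be anticipated by a programmer, and the system cannot generalize beyond its rules.

```python
# Toy illustration of rule-based ("weak") AI: hand-coded rules translate
# certain inputs into certain outputs. The rules and categories below
# are invented for illustration only.

def classify_document(text: str) -> str:
    """Route an incoming document using fixed keyword rules."""
    lowered = text.lower()
    if "subpoena" in lowered or "summons" in lowered:
        return "litigation"
    if "merger" in lowered or "acquisition" in lowered:
        return "corporate"
    if "patent" in lowered or "trademark" in lowered:
        return "ip"
    return "needs human review"  # anything the rules don't anticipate

print(classify_document("Notice of subpoena duces tecum"))  # litigation
print(classify_document("Draft merger agreement v3"))       # corporate
print(classify_document("Thank-you note from a client"))    # needs human review
```

Note how brittle this is: a document about a "corporate reorganization" falls through every rule, which is exactly the limitation that training data and human feedback loops are meant to overcome.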

Despite these limits, however, in the very recent past, cutting-edge AI has begun accomplishing complex and difficult tasks with increasing frequency. In other words, the field of play set by the limitations is expanding rapidly.

In his excellent #Legalweek17 keynote, "Trends, Technology, and Talent in the Second Machine Age" (building on his bestselling book, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies), Andrew McAfee observed that the recent scale, pace, and scope of tech progress is surprising almost everyone involved in the field, and he pointed to three key drivers of that progress and acceleration:

  • the exponential increase of, and access to, computing power (i.e., a generalization/extrapolation of Moore’s Law);
  • the oceans of data being created and captured in today’s increasingly digital, integrated, and sensor-saturated world; and
  • the democratization of technological education and expertise, as well as access to crowd-based learning.

AI and the Law

As you can probably glean from the above, there’s no good reason to believe that the lawyers of tomorrow or next year will be churned out of labs, as opposed to law schools. But there’s also no good reason to underestimate the power of the coming AI wave.

Many lawyers — particularly, in my experience, good ones at top firms who think their skill sets are exceptional — are quick to dismiss the potential of AI and automation to affect their practices (in any meaningful way). That’s a mistake.

For certain types of tasks, such as categorizing information (think: discovery or due diligence, for legal use-cases), AI has matured to a point of incredible power and effectiveness, and for more complex tasks, including some of those widely seen as bastions of professional human judgment, certain cutting-edge AI techniques are matching, or even outstripping, human capabilities in targeted ways.
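The "categorizing information" task mentioned above is, at its core, supervised text classification. As a hedged sketch of the idea (not how any real e-discovery product works), here is a minimal multinomial Naive Bayes classifier in plain Python; the training documents and "responsive"/"non-responsive" labels are invented examples.

```python
import math
from collections import Counter, defaultdict

# A minimal Naive Bayes text classifier, the kind of supervised
# "categorizing" technique used in document review (a toy sketch;
# real e-discovery tools are far more sophisticated).
# The training documents and labels below are invented examples.
train = [
    ("stock purchase agreement closing conditions", "responsive"),
    ("indemnification escrow purchase price adjustment", "responsive"),
    ("lunch order for the team meeting", "non-responsive"),
    ("holiday party schedule and parking", "non-responsive"),
]

# Count word frequencies per label.
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Pick the label with the highest (log) posterior probability."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / len(train))  # log prior
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            count = word_counts[label][word] + 1
            score += math.log(count / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("escrow and purchase price terms"))  # responsive
```

Unlike the hand-coded rules of classic "weak" AI, nothing here tells the program which words matter; it infers that from labeled examples, which is why the size and quality of the training set is so decisive.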

For example, using neural networks and "deep learning" AI techniques, a team at Stanford has leveraged image-recognition technology to create an app that analyzes pictures of moles and skin lesions to diagnose skin cancer as accurately as a board-certified dermatologist. And in March of last year, Google DeepMind's AlphaGo bested 18-time Go world champion Lee Sedol in a five-game match, four games to one.

That victory is significant, as Andrew McAfee described in his keynote, because the world's best Go players can't really articulate their winning strategy. The right moves simply emerge for them from some combination of intuition and creativity. (An illustration of Polanyi's Paradox: "We know more than we can tell.") For that reason, it was assumed until last March that, unlike checkers or even chess, Go was beyond the reach of today's AI. How can a computer program beat a human being at a game where the right moves are inarticulable and therefore can't be programmed into an algorithm? That theory was borne out by the fact that pre-AlphaGo computer programs had only ever achieved an amateur level of play. But the DeepMind team used a combination of AI approaches to allow the computer to train itself and develop its own rules for right moves based on its own learning and analysis. The result shattered the assumed Polanyi's Paradox ceiling on AI's capacity to exhibit strategic thinking. (And lest you think AlphaGo's victory wasn't really a paradigm shift from earlier, imitation-based approaches to AI gaming: in game two of the match, AlphaGo committed strategic heresy, making a move that would have drawn ridicule from Go professionals had a human player made it. AlphaGo went on to win the game.)

But even in the face of these AI achievements, we think lawyers shouldn’t fear AI and other technologies that automate legal workstreams.

Four Reasons Not to Fear AI and Automation

Reason 1: The Tech Works

No one becomes a lawyer to push papers, format documents, and spend hours sifting through mountains of irrelevant material simply to identify the documents requiring further attention. Yet a huge swath of the modern lawyer’s time is spent on such administrative headaches, project management, digging through email folders or document-management systems, and doing other tasks that are ripe for digital automation.

Imagine getting to spend nearly your entire day doing original thinking, conducting bespoke legal analysis, and focusing on the key idiosyncrasies of your client’s unique position to craft solutions to the problems they face. Imagine getting to be the lawyer that you dreamed of being in law school. There are no silver tech bullets out there in the current market, but there are a growing number of tools that will streamline or eliminate the rote, repetitive tasks that eat up so much of your day and allow you to focus on what you deem most important.

It’s not just about efficiency. When asked what’s market practice in a particular deal or what’s the likelihood of success in a particular case, imagine being able to rely on data to guide and support your intuition. Imagine the liability-management benefits of not having to rely exclusively on an exhausted first-year associate to spot the important clause in document 1,206 of 5,003 in the VDR at 3:00 in the morning. Imagine having all the best training resources and precedent materials at your fingertips for any issue the moment it arises, so that you’re spending your time doing original thinking and real legal analysis, as opposed to digging through your firm’s resources.

In sum, the tech will make us better lawyers and lead to better results for our clients.

Reason 2: The Human Advantage

Ceding potentially billable ground to computers is scary. But the slope isn't entirely slippery. There are many areas, given how modern AI works, where the technology simply can't help or where humans are really just better.

For example, for high-level analytical tasks, such as those discussed in the examples above, AI algorithms need a huge amount of "training data" to work effectively. The cancer-screening app, for example, required massive repositories of expertly categorized images, and AlphaGo needed millions of examples of prior Go games in order to generate a successful "strategy". And while some legal tasks are straightforward enough, or supported by high enough volumes of training data, that AI can facilitate or replace what lawyers currently do, there are innumerable areas where the required analysis is bespoke and idiosyncratic enough that human judgment reigns supreme.

Moreover, the practice of law isn't all about analysis and a tangible work product. In fact, I'd argue that some of the most important parts of being a great lawyer have to do with your EQ, not IQ. It's about listening to and understanding what's really going on in a particular matter. It's about persuading someone else — your client, the other side, your team — to adopt a position or particular view. And the second we start talking about emotional intelligence, persuasion, and communication, computers are entirely out of their depth.

Reason 3: The Pressure is Not All Bad

Yes, automating tasks currently done by humans means that some legal work is going away. But that’s not necessarily a bad thing.

For example, given modern fee pressures, many firms are writing off or negotiating away meaningful chunks of time, and that "extra" time is often attributable to the admin work and rote tasks that can be delegated to technology. Automating those tasks would let firms reconcile their staffing and fiscal priorities with client fee pressure. Additionally, work that is currently being lost to in-house legal departments and alternative legal providers through the unbundling of corporate legal services could be retained or recaptured if firms can get it done more cost-effectively.

What's more, by shifting the more rote, commoditizable tasks to computers, lawyers will be increasingly pushed to spend more time on the high-value activities that computers can't do, which is likely not only to be more rewarding for lawyers but also to lead to better results for clients. Instead of acting as word processors and project managers, lawyers can focus on being counselors and trusted advisors who spend their time doing greenfield thinking about clients' unique issues, rather than wasting time reinventing wheels.

In short, the AI revolution will put pressure on firms to put their human capital to its best and highest use. And we think that's a good thing for all involved.

Reason 4: The Democratization of Access to Legal Services

Finally, there are important conversations happening around the world about the morality of innovation and automation. (How do we deal with the resulting job loss? Who is being left behind in the new economy? How do we protect and rejuvenate a middle class to protect against the creation of two economies — one for highly skilled professionals and business owners and the other for everyone else in a race-to-the-bottom service economy?)

Though most lawyers occupy a relatively privileged position in the grand scheme of technological revolution, even in the law, there are many who worry about the jobs and services that will be displaced by AI and automation technologies. And while our industry is certainly ripe for disruption, I take a relatively optimistic view of what a tech-enabled future means for the world of legal services.

Unlike industries where markets are saturated and fully served (such as the freight and trucking industry, where automation will almost certainly result in devastating job losses), I believe the market for legal services is much bigger than that which is currently being served. There are millions of people every day who need, but don’t get, legal advice. It’s too expensive. It’s hard to access. Yet, with technological advancements that will both permit people seeking legal guidance to access it in non-traditional ways (think: LegalZoom and its peers) and allow more traditional providers to offer services at lower costs and higher volumes with the assistance of technology, many more people who would benefit from legal advice are likely to get it. In other words, legal tech advancements may well lead to a great democratization of access to legal services and therefore greater access to legal and economic justice.

True, the base of the pyramid at large law firms may narrow, as tasks previously accomplished by associate brow-sweat are now accomplished with the click of a button; and there may be a flight to quality at the top of the market that excludes some current players; but with so many potential consumers for additional or better legal services out there, I have faith that market forces will help create plenty of new opportunities for lawyers to offset the lost ones. And it's also worth noting that even the most advanced AI still requires a substantial amount of expert human engagement and involvement, so it's simply not the case that automation leads to lost jobs on a one-for-one basis. (See Matt Turck's interesting recent blog post on the subject, "Debunking the 'No Human' Myth in AI," for a related discussion.)

(First published at Blacklines & Billables.)


Christian Lang

Recovering corporate lawyer, legal tech enthusiast, founder of the NY Legal Tech Meetup, Blacklines & Billables, Dealtech, and The Firm Formula