Uber Tragedy Raises Questions Around ‘Driverless’ Legal AI

Jacob Heller
Casetext Blog
3 min read · Apr 23, 2018

It was the car accident heard round the world. A couple of weeks ago in Tempe, Arizona, an autonomous Uber vehicle struck and killed a 49-year-old woman, Elaine Herzberg. We’ve all heard about it. And as the head of a startup that is focused on AI, I’ve done some thinking about it.

Not, of course, because Casetext presents any physical danger to anyone. We’re not like automakers in that respect. But we have this in common with Uber and other companies working on autonomous vehicles: we make an artificially intelligent product that our users (rightly) depend on to perform well. And while mishaps in legal research don’t lead directly to physical injury, they can make a difference in civil and criminal cases with huge stakes.

With that in mind, I couldn’t help but consider whether the Uber tragedy holds any takeaways for Casetext and other AI-based companies in the legal space. Three areas spring to mind.

First, we are simply not ready (and may never be) for the “driverless” AI experience in the legal profession. None of the AI-based tools that have come on the market even attempt to replace lawyers; instead, they facilitate their work. From legal research to contract review to due diligence, AI applications are intended to make lawyers’ work more efficient, and provide them with insights they may not have discovered on their own. But they do not substitute for the lawyer the way a driverless car substitutes for the driver. In the law, at least, there is still nothing that can replace human judgment.

Second, it’s helpful to remember that accidents will occur even when humans are at the proverbial wheel. That was literally the case in the Arizona accident, where Uber’s autonomous car had a “safety driver” inside it. Unfortunately, we are imperfect, just as the machines we make are. On the plus side, in driving, practicing law and other endeavors, humans and machines can act as a check on each other, greatly improving performance and reducing errors that would occur without input from both.

Third and finally, although news of the Uber crash was a shock, we should consider the appropriate reaction when AI systems are involved in accidents. In the case of Uber, there is a very understandable possibility that the accident will sour legislators on autonomous vehicles and turn public sentiment against them. At the same time, the Tempe chief of police has said that Uber “likely would not be at fault in this accident,” given how little time the car had to react to the pedestrian emerging onto the road. The wisest course is to soberly assess whether AI is improving performance in the discipline in question — whether it be driving or something else — and making errors less common.

I’m confident that in the law, we’re on the right track, even though we’ll never get rid of error entirely. We’re only human, after all.
