Algorithms fall short in predicting litigation outcomes

Jason Tashea
Sep 1, 2018

A small law firm in Annapolis, Maryland, is harnessing big data.

Emanwel Turnbull, one of two attorneys at the Holland Law Firm, uses a unique, statewide database to gain early insight into his cases.

The database, which has over 23 million civil and criminal cases from Case Search, the state judiciary’s document portal, allows Turnbull to analyze the behavior of an opponent, check a process server’s history or learn whether an opposing attorney has an unscrupulous track record.

“Back in the old days, they’d have an intern or secretary manually go through judiciary Case Search to try and find these things out,” says Turnbull.

Now, it takes seconds.
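The article doesn’t describe Turnbull’s setup in technical detail, but a lookup like the ones he describes could be a short query against a local copy of the scraped records. The sketch below is purely illustrative: the table name, columns, and sample rows are invented for the example and are not the Holland Law Firm’s actual schema or data.

```python
# Hypothetical sketch: querying a local copy of case records scraped from
# Case Search. Schema and rows are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the firm's real database
conn.execute("""
    CREATE TABLE cases (
        case_number TEXT,
        case_type   TEXT,
        party_name  TEXT,
        server_name TEXT,
        disposition TEXT,
        filed_date  TEXT
    )
""")

# A couple of made-up rows so the query below runs end to end.
conn.executemany(
    "INSERT INTO cases VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("CV-18-000001", "civil", "Acme Collections LLC",
         "J. Doe", "affidavit stricken", "2018-03-01"),
        ("CV-18-000002", "civil", "Acme Collections LLC",
         "J. Doe", "default judgment", "2018-04-15"),
    ],
)

# Example question: how often has a given process server's affidavit
# been stricken, out of all the cases in which that server appears?
rows = conn.execute(
    """
    SELECT server_name,
           COUNT(*) AS total_cases,
           SUM(CASE WHEN disposition = 'affidavit stricken' THEN 1 ELSE 0 END)
               AS stricken
    FROM cases
    WHERE server_name = ?
    GROUP BY server_name
    """,
    ("J. Doe",),
).fetchall()
print(rows)  # -> [('J. Doe', 2, 1)]
```

A query like this returns in well under a second on millions of rows with an index on the search column, which is the kind of speedup Turnbull describes over manual searches of the Case Search portal.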

While it saves time, the database cannot provide dispositive evidence, because the underlying records carry input and clerical errors. Even with this shortcoming, Turnbull says the database points him “in the direction of further research,” which he uses on behalf of his clients.

Turnbull’s work reflects data’s growing role in law. With increased computing power and more material, law firms and companies are evolving the practice and business of litigation. However, experts say these projects can be hindered by the quality of data and lack of oversight.

Many data-driven projects promise efficiency and lower legal costs for firms and clients.

Littler Mendelson developed CaseSmart, launched in 2010, “to provide better value” to clients with leaner legal budgets after the recession. The project is a repository for data created by a client’s legal issues, explains Scott Forman, a shareholder in the firm’s Miami office.

Continue reading at www.abajournal.com.

Jason Tashea writes about tech, data, and the legal system. Founder @JusticeCodes, Staff Writer @ABAJournal, and Adjunct Prof @GeorgetownLaw.