Technology

Algorithms fall short in predicting litigation outcomes


A small law firm in Annapolis, Maryland, is harnessing big data.

Emanwel Turnbull, one of two attorneys at the Holland Law Firm, uses a unique, statewide database to gain early insight into his cases.

The database, which has over 23 million civil and criminal cases from Case Search, the state judiciary’s document portal, allows Turnbull to analyze the behavior of an opponent, check a process server’s history or learn whether an opposing attorney has an unscrupulous track record.

“Back in the old days, they’d have an intern or secretary manually go through judiciary Case Search to try and find these things out,” says Turnbull.

Now, it takes seconds.

While it saves time, the database does not provide dispositive evidence because the records it draws on reflect input and clerical errors. Even with this shortcoming, Turnbull says the database points him “in the direction of further research,” which he uses to aid his clients.

Turnbull’s work reflects data’s growing role in law. With increased computing power and more material, law firms and companies are evolving the practice and business of litigation. However, experts say these projects can be hindered by the quality of data and lack of oversight.

Many data-driven projects promise efficiency and lower legal costs for firms and clients.

Littler Mendelson developed CaseSmart, launched in 2010, “to provide better value” to clients with leaner legal budgets after the recession. The project is a repository for the data generated by a client’s legal matters, explains Scott Forman, a shareholder in Miami.

Through an intake team in Kansas City, Missouri, employment law firm Littler can capture data that, in the aggregate, points to the specific company policies or employees creating legal problems. Clients can take these outputs and decide, for example, to increase training or change a policy to reduce the risk of future litigation.

While the data is good for trendspotting, it is not used to build predictive algorithms. Forman wants to add that capability at some point; however, he says there is not enough “clean data” to train an algorithm.

Algorithms are built on structured data, which means data quality shapes accuracy. If the underlying data is of low quality, the old saying applies: “garbage in, garbage out.”
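
To make that concrete, here is a minimal Python sketch using entirely hypothetical numbers; it is not based on any tool described in this article. It shows how clerical errors of the kind Turnbull sees in court records can distort the basic statistic, the win rate, that a predictive model would be trained on:

# Toy illustration of "garbage in, garbage out" (hypothetical data, not
# any tool described in the article). Clerical errors that flip recorded
# case outcomes bias the win-rate statistic a predictive model is
# trained on, even though the underlying cases never changed.
import random

random.seed(7)

def simulate_cases(n=20_000, true_win_rate=0.70):
    """True outcomes: plaintiffs win 70% of these simulated cases."""
    return [random.random() < true_win_rate for _ in range(n)]

def record_with_errors(outcomes, error_rate):
    """Clerks mis-enter a fraction of outcomes, flipping win <-> loss."""
    return [(not won) if random.random() < error_rate else won
            for won in outcomes]

true_outcomes = simulate_cases()
print(f"true win rate: {sum(true_outcomes) / len(true_outcomes):.1%}")

for error_rate in (0.05, 0.20, 0.40):
    recorded = record_with_errors(true_outcomes, error_rate)
    est = sum(recorded) / len(recorded)
    print(f"clerical error rate {error_rate:.0%}: "
          f"estimated win rate {est:.1%}")

Trained on the noisiest records, a model would calibrate to roughly a 54 percent win rate instead of the true 70 percent, and the distortion grows with the error rate, which is the concern behind Forman’s “clean data” remark.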

Travis Lenkner, managing partner at Keller Lenkner in Chicago and senior adviser at Burford Capital, a litigation finance firm, says the use of algorithms and data in litigation finance “will be measured and limited by the availability of data.”

While Burford’s work is informed by data, Lenkner is skeptical that the quality of current court data allows for good predictions. “Imagine your county courthouse,” he says, and consider the state of its records. You quickly realize “there’s a lot of work to do.”

One way to put a check on data and algorithmic systems is through auditing, explains Christian Sandvig, a professor at the University of Michigan.

Like a financial audit, this process can determine, among other things, whether there are data quality issues, whether an algorithm meets legal standards, or whether the tool is producing unintended or biased outcomes. Sandvig says audits are not “just investigative or punitive”; companies should also use them to monitor their own operations. Currently, he says, there are no industry standards on what or who should be audited, or how.
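
As one narrow illustration of what such an audit might look for, and using hypothetical data rather than anything from Sandvig’s own methodology, here is a short Python sketch of a disparate-impact check built on the common “four-fifths” rule of thumb:

# A narrow slice of what an algorithmic audit might check (hypothetical
# data; a full audit covers far more than this): a disparate-impact
# test comparing a model's favorable-outcome rates across groups,
# using the EEOC "four-fifths" rule of thumb.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, favorable) pairs from a model's output."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        favorable[group] += fav
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Flag any group whose favorable rate is below 80% of the best group's.
    return {g: (rate, rate / best >= 0.8) for g, rate in rates.items()}

# Hypothetical audit input: (group, model_said_yes) pairs.
sample = [("A", True)] * 60 + [("A", False)] * 40 \
       + [("B", True)] * 40 + [("B", False)] * 60
for group, (rate, passes) in sorted(disparate_impact(sample).items()):
    print(f"group {group}: favorable rate {rate:.0%}, "
          f"{'ok' if passes else 'flagged'} under the four-fifths rule")

A real audit would reach much further, into the training data and the legal standards Sandvig mentions, but even a check this simple needs someone with a mandate to run it, which is where the missing industry standards bite.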

Call an auditor

Whether or not this call for oversight is heeded, companies in the legal market see growing demand for better analytics. Justly, a litigation analytics startup founded in New York City in 2015, is one.

CEO Laurent Wiesel explains that Justly aggregates litigation data to “forecast time and cost” in business cases. Subscriptions start at $1,500 a month. Beyond forecasting, Justly provides reports on clients and opposing counsel. The predictions are derived from a data set of 17.5 million state and federal cases going back to 2003.

Informed by his work as an attorney, Wiesel says Justly improves on the traditional, “labor intensive” approach law firms take to estimating the needs of litigation. When asked, he says Justly has not undertaken third-party auditing, but he is open to the idea.

The co-founders of Legalist take a different approach with their data, which is used to inform litigation financing.

Eva Shang, CEO and co-founder of Legalist, says her company’s data allows it to look “at the likelihood” that a case will win at trial. This infusion of data-driven financing, Shang says, helps close the justice gap for small businesses.

Shang says Legalist, whose data set holds 15 million state and federal cases, has not engaged auditors. Investors perform their own due diligence before investing, she says, and that diligence includes the fact that Legalist has not yet suffered a loss.

Since no one would ask a large insurance company to open its underwriting models, she argues, the auditing question is irrelevant to Legalist.

While aware of startups in his field such as Legalist, Lenkner is comfortable with Burford’s use of data and algorithms. He has an appetite for more robust analytics, he says, but only once the data is more trustworthy.

“We are stewards of our investors’ and our shareholders’ capital, and they invest with us precisely because of our expertise and judgment and experience, not in spite of it,” Lenkner says.


This article was published in the September 2018 ABA Journal magazine with the title "Good Data, Bad Data: When it comes to predicting outcomes in litigation, algorithms are only as good as their underlying information."
