E-Discovery Research Stirs Controversy

March 2, 2013
By: Alice M. Porch
Managing Editor, Tampa Bay
Research shows that electronic discovery (e-discovery) using “predictive coding” software produces accurate results and saves clients thousands of dollars. In 2012, judges approved the use of predictive coding as an acceptable method of searching through electronically stored information (ESI) for cases that have a large number of documents to review.
 
Traditionally, e-discovery has involved exhaustive manual document review by experienced attorneys. The process can be extremely expensive and time consuming because some cases involve millions of documents. The expense is unavoidable: parties must produce all of the required documents for their cases, and mismanagement of ESI can lead to evidence spoliation.
 
Predictive coding uses algorithms to train a computer to review and code documents based on instructions from a human reviewer. The reviewer codes a small set of documents, and the computer identifies the properties that distinguish relevant documents from non-relevant ones. As more samples are added, the computer learns from the reviewer’s decisions and applies what it learns to new documents. The reviewer gives the computer feedback on its accuracy, and eventually the computer learns enough to make accurate predictions for the remaining documents.
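The feedback loop described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any vendor's actual software: a reviewer labels a small seed set, a toy word-frequency model learns from it, and the model then scores new documents. (All document text and labels here are invented for the example.)

```python
# Minimal sketch of the predictive-coding loop: train on reviewer-coded
# documents, then predict relevance for new ones. Illustrative only.
from collections import Counter

def train(labeled_docs):
    """Count how often each word appears in relevant vs. non-relevant docs."""
    relevant, non_relevant = Counter(), Counter()
    for text, is_relevant in labeled_docs:
        (relevant if is_relevant else non_relevant).update(text.lower().split())
    return relevant, non_relevant

def predict(model, text):
    """Score a document by comparing its word overlap with each class."""
    relevant, non_relevant = model
    score = sum(relevant[w] - non_relevant[w] for w in text.lower().split())
    return score > 0

# The reviewer codes a small seed set of documents...
seed = [
    ("quarterly merger negotiations memo", True),
    ("merger due diligence checklist", True),
    ("office holiday party invitation", False),
    ("cafeteria lunch menu update", False),
]
model = train(seed)

# ...and the computer applies what it learned to a new document.
# In practice, reviewer corrections on such predictions are added back
# into the training set, which is the "feedback" step described above.
print(predict(model, "draft merger agreement terms"))  # prints True
```

Real predictive-coding systems use far more sophisticated statistical classifiers, but the essential cycle of code, train, predict, and correct is the same.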
 
Researchers are conducting studies to measure how technology-assisted review (TAR) compares to manual document review. The Legal Track of the Text Retrieval Conference (TREC), an annual government-sponsored project, evaluates document review methods. Its 2011 results gave TAR “a virtual vote of confidence” and concluded that its efforts “require human review of only a fraction of the entire collection, with the consequence that they are far more cost-effective than manual review.”
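Studies like these typically score a review method by comparing the documents it marked relevant against a gold-standard set, reporting recall (the share of truly relevant documents found) and precision (the share of retrieved documents that were truly relevant). A minimal sketch of that scoring, with made-up document IDs:

```python
# Compare a review method's output against a gold standard and report
# recall and precision. Document IDs below are invented for illustration.

def recall_precision(retrieved, truly_relevant):
    found = retrieved & truly_relevant
    recall = len(found) / len(truly_relevant)
    precision = len(found) / len(retrieved)
    return recall, precision

gold = {"d1", "d2", "d3", "d4"}        # documents the gold standard deems relevant
tar_output = {"d1", "d2", "d3", "d9"}  # documents a hypothetical TAR run retrieved

recall, precision = recall_precision(tar_output, gold)
print(f"recall={recall:.2f} precision={precision:.2f}")  # recall=0.75 precision=0.75
```

A method with high recall at low human-review cost is exactly what the TREC findings credit TAR with achieving.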
 
William Webber, a researcher and PhD candidate at the University of Melbourne, evaluated the accuracy of predictive coding software. Webber used two assessors in his e-discovery experiment who were not employed in the legal profession. Using the software, these assessors achieved reliable results competitive with those of professional document reviewers.
 
So, who were the assessors in Webber’s experiment? Surprisingly, they were high school seniors interning in Webber’s e-discovery lab, with no legal training and no prior e-discovery experience.
 

Webber maintains a research blog where he discussed his recent experiment. The results were disturbing to some of Webber’s blog readers. An angry commenter wrote, “You suggest that Document Reviewers whom go to law school, pass the bar, are basically producing the same level of work as a bunch of High Schoolers?” Another defensive commenter wrote, “Some reviewers are bound to make mistakes doing 10-12 hours of routine clicking. The simple[r] the project the more likely that said high school students might do better.”

Webber explained to the disgruntled blog readers that the purpose of his study was to provide valid academic research.

Scientific studies are important to law firm managers because the results help them objectively evaluate the benefits of investing in e-discovery software. Importantly, research results serve as a guide to the best way to integrate new technologies into a law practice and remain competitive with other firms. As a result, predictive coding will change the future business practices of the legal industry, an industry that will remain under the scrutiny of academic researchers.