Oct 23, 2017

For a while now, law firms have been slow to adopt technology assisted review, despite studies proving its effectiveness and approval from the courts. In the August 20, 2015 Tip of the Night, I noted that in a webinar hosted by Exterro, Chief Judge Joy Conti of the United States District Court for the Western District of Pennsylvania observed that, despite the developments in Rio Tinto v. Vale encouraging transparency between parties with respect to predictive coding seed sets, she has not seen many predictive coding cases in her court.

Last night, I took the Electronic Discovery Institute's course on Search and Review of Electronically Stored Information. Maura Grossman, a well known expert on technology assisted review, stated during the course that, "Manual review and technology assisted review are being held to different standards. One would never have gotten a request to show the other side your coding manual, or have them come visit your review center, and lean over the shoulders of your contract reviewers to see whether they were marking documents properly - responsive - not responsive. I don't remember ever being asked for metrics - precision / recall of a manual production - but I think people are scared of the technology. There are many myths that if three documents are coded incorrectly the technology will go awry and be biased or miss all of the important smoking guns. But humans miss smoking guns also. And, so, I do think it's being held to a different standard. I think that creates disincentives to use the technology because if you're going to get off scot-free with manual review and you're going to have to have hundreds of meet and confers and provide metrics and be extremely transparent and collaborative if you're using the other technology, for many clients it becomes very unattractive and their feeling is it's just easier to use the less efficient process."

Judge Conti's observations and Ms. Grossman's theorizing are borne out by a study conducted by Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey of the University of Pennsylvania, Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err, Journal of Experimental Psychology (2014). The synopsis of the article states that:

"Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake."

The paper doesn't specifically address technology assisted review and the process of identifying responsive documents. It refers to the use of algorithms in the college admissions process and in predicting the number of airline passengers, and obtained its results from a controlled study in which volunteer participants would have received a bonus if they used a statistical model rather than forecasts made by humans.

Dietvorst mentions the inability of algorithms to learn from experience, their inability to evaluate individual targets, and ethical qualms about having automated processes make important decisions as reasons for humans' aversion to adopting their use.

As the graphs below show, participants in the study were less likely to choose an algorithm not only when they saw the results of its forecasts, but also when they were able to compare its results against decisions made by humans. Seeing humans make mistakes, while knowing that the statistical model could err as well, did not significantly encourage them to adopt the algorithm.



One of the first tips posted to this site, back on April 29, 2015, was that an 'HSR Second Request' under the Hart–Scott–Rodino Antitrust Improvements Act provides a good opportunity to convince a party to use technology assisted review. The Second Request is a "Request for Additional Information and Documentary Materials" made by the FTC when it believes that a merger may interfere with competition in a particular market.

Kroll and an attorney at Wachtell Lipton LLP collaborated on a December 2016 presentation on Second Requests, entitled "The Nuts and Bolts of Second Request Compliance", which is available on the site of the New York State Bar Association. It details how technology assisted review can expedite the production of the large number of documents that must be completed before the 10- to 30-day waiting period preceding the closing of a transaction can begin.

Second Requests may delay the completion of mergers by an average of as long as three months. Substantial compliance is required before the second waiting period can begin; it is usually 'extremely burdensome' and can cost 'several million dollars'.

The DOJ and FTC have set guidelines for the discovery process, allowing for 30 to 35 custodians; recommending four to six months of pretrial discovery if the parties end up litigating; and specifying that discovery should cover a time range beginning two years before the request and ending 30 to 45 days before it. The FTC may require a full privilege log for five custodians before compliance is acknowledged.

The DOJ's Second Request Model requires that parties provide a written description of search terms or a predictive coding method. It states that:

"For any process that instead relies on predictive coding to identify or eliminate documents, you must include (a) confirmation that subject-matter experts will be reviewing the seed set and training rounds; (b) recall, precision, and confidence-level statistics (or an equivalent); and (c) a validation process that allows for Department review of statistically-significant samples of documents."
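The recall, precision, and confidence-level statistics the model requires are straightforward to compute from a manually reviewed validation sample. As a minimal sketch (the sample counts below are hypothetical, and the normal-approximation confidence interval is one of several accepted methods), the calculation might look like this:

```python
import math

def review_metrics(sample, z=1.96):
    """Precision, recall, and a normal-approximation margin of error
    for recall, from a manually reviewed validation sample.

    sample: list of (predicted_responsive, actually_responsive) booleans,
    where the first value is the TAR tool's call and the second is the
    human reviewer's call on the same document.
    """
    tp = sum(1 for p, a in sample if p and a)       # tool and reviewer agree: responsive
    fp = sum(1 for p, a in sample if p and not a)   # tool said responsive; reviewer disagreed
    fn = sum(1 for p, a in sample if not p and a)   # responsive documents the tool missed

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0

    # 95% margin of error for recall (z = 1.96), normal approximation
    n = tp + fn
    margin = z * math.sqrt(recall * (1 - recall) / n) if n else 0.0
    return precision, recall, margin

# Hypothetical validation sample of 1,000 documents:
# 80 true positives, 10 false positives, 20 false negatives, 890 true negatives.
sample = ([(True, True)] * 80 + [(True, False)] * 10 +
          [(False, True)] * 20 + [(False, False)] * 890)

precision, recall, margin = review_metrics(sample)
print(f"precision={precision:.2f} recall={recall:.2f} ±{margin:.2f}")
```

With these assumed counts, the tool would report a recall of 0.80 with a margin of error of about 0.08, the kind of figure the Department could then verify against its own sample.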

The presentation encourages the parties to collect documents before a Second Request is made. Notably, it quotes a publication of the DOJ on TAR that specifically endorses its use for Second Requests: "Predictive coding is preferred because the judgments about responsiveness during manual review are less accurate and almost certainly are not consistent among reviewers." See “Technology Assisted Review and other Discovery Initiatives at the Antitrust Division”.



In FCA US, LLC v. Cummins, No. 16-12883 (E.D. Mich. Mar. 28, 2017), Judge Avern Cohn ruled that applying technology assisted review to a full data set before culling it down with keyword searches is preferable to culling with keyword searches before running TAR. The court's brief order cites only one source other than the parties' submissions: the Sedona TAR Case Law Primer. Back on October 11, 2016, the Tip of the Night summarized the Sedona Conference's guide and noted that, "The Northern District of Indiana denied a motion to make a party redo its review with TAR after it had already performed keyword searches, and let it proceed by applying TAR to a set culled down through the use of the keyword searches, but the District of Nevada stated that it was not a best practice to use TAR on documents found with the use of traditional search terms." Section V, "Disputed Issues Regarding TAR", contains subsection C, entitled "Using Search Term Culling Before TAR". There, the following points in favor of running TAR before keyword searching are made:

1. N.D. Ind. - In re Biomet M2A Magnum Hip Implant Products Liability Litigation, - predictive coding might find documents a keyword search would not.

2. D. Nev. - Progressive Casualty Ins. Co. v. Delaney - applying TAR only to documents hitting keyword searches was contrary to the best practices guide of a software vendor.

3. S.D.N.Y. - Rio Tinto v. Vale - Judge Peck said that keyword culling before running TAR would not occur in a perfect world.

The court noted that the parties should have been able to reach an agreement on how to do culling between themselves without involving the court.
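The core problem with keyword culling before TAR can be illustrated with a toy calculation (the numbers below are entirely hypothetical): any responsive document that hits no search term is removed before the classifier ever sees it, so the culling step places a hard ceiling on the recall of the whole workflow.

```python
# Toy illustration with assumed, hypothetical numbers: suppose a collection
# contains 1,000 responsive documents, and the negotiated search terms hit
# only 700 of them (the other 300 use vocabulary the terms miss).

total_responsive = 1000
responsive_hitting_keywords = 700

# Even a perfect classifier applied after keyword culling can never find
# the 300 responsive documents that were culled out, so the workflow's
# recall can never exceed this ceiling:
max_recall_after_culling = responsive_hitting_keywords / total_responsive
print(f"recall ceiling after keyword culling: {max_recall_after_culling:.0%}")

# TAR applied to the full data set has no such ceiling; its recall is
# limited only by the quality of the training and validation process.
```

This is essentially the point made in In re Biomet and echoed by Judge Peck in Rio Tinto: predictive coding run on the full set can surface responsive documents that no keyword search would ever return.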


Sean O'Shea has more than 20 years of experience in the litigation support field with major law firms in New York and San Francisco.   He is an ACEDS Certified eDiscovery Specialist and a Relativity Certified Administrator.

The views expressed in this blog are those of the owner and do not reflect the views or opinions of the owner’s employer.



© 2015 by Sean O'Shea.
