
The EDRM is currently focusing on developing best practices for Technology Assisted Review. This past September the Duke Law Center for Judicial Studies (of which the EDRM is a part) held a conference on Technology Assisted Review. Fifteen judges and more than 75 ediscovery experts, including Tony Scott, the former Chief Information Officer of the federal government, participated in the conference. The discussion centered on ediscovery workflows and the differences between TAR and other ediscovery review methods. The conference used the EDRM's Computer Assisted Review model as a starting point for an overview of the TAR process.

As this graphic shows, the goals for TAR will vary from case to case, and a fact-specific protocol must be followed and learned by the actual document reviewers.

Statistical sampling should be used to test the accuracy of the results.
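As a rough illustration of what such sampling involves, the sketch below draws a random sample from a document set and estimates the responsiveness rate with a 95% confidence interval. The function name, sample size, and the toy document set are all hypothetical, not part of any EDRM protocol:

```python
import math
import random

def estimate_rate(documents, is_responsive, sample_size=400, seed=42):
    """Estimate the responsiveness rate of a document set from a random
    sample, with a 95% margin of error (normal approximation)."""
    rng = random.Random(seed)
    sample = rng.sample(documents, min(sample_size, len(documents)))
    hits = sum(1 for doc in sample if is_responsive(doc))
    p = hits / len(sample)
    margin = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, margin

# Hypothetical collection: 10,000 documents, about 5% responsive.
docs = [{"id": i, "responsive": i % 20 == 0} for i in range(10000)]
rate, moe = estimate_rate(docs, lambda d: d["responsive"])
print(f"estimated responsiveness: {rate:.1%} +/- {moe:.1%}")
```

In practice the sample size would be chosen to hit a target margin of error, and the sampled documents would be reviewed by hand rather than checked against a known label.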

At a meeting I attended today with Jack Knight and John K. Rabiej, the co-director and director of the Center, they noted that some electronic discovery service providers may balk at having to limit their approach to Technology Assisted Review according to specific guidelines.

Be sure to look out this year for the publication of the EDRM's TAR best practices and protocol. Mike Quartararo (the author of Project Management in Electronic Discovery) of Stroock & Stroock & Lavan LLP and Adam Strayer of BDO Consulting are leading the effort.



On November 27, 2017, in Winfield v. City of New York, 2017 U.S. Dist. LEXIS 194413 (S.D.N.Y.), Magistrate Judge Katharine Parker of the Southern District of New York ruled that the defendants did not have to disclose information on the TAR system they used for document review, while granting the plaintiffs' request to sample non-responsive documents. The Court did review information related to the City's document ranking system in camera, but did not disclose this information to the plaintiffs.

The Court ordered the City to perform a TAR review on the rest of its document set after it completed its linear review (run with search terms the parties agreed to only with the Court's help) on ESI collected from certain custodians. The Court found that TAR was necessary to hasten the identification, review, and production of responsive documents. The City had complained that it cost more than $350,000 to review 100,000 documents using linear review.

To support their position that the City's TAR system did not correctly identify responsive documents, the plaintiffs pointed to documents the City produced inadvertently, or produced with redactions, which were in fact responsive. "Among these were two electronic documents for which the City only produced a 'slip sheet' because the documents had been designated as non-responsive (the 'slip-sheeted' documents), but where Plaintiffs were nevertheless able to view the 'extracted text' of the documents due to a production error." (Id. at *16). The Court ordered briefing and depositions on the defendants' privilege designations, and also requested a log of a sample of 80 privileged documents. After reviewing the sample, the City changed its designation of 36 documents from privileged to responsive.

Nevertheless, the Court found no evidence of gross negligence or unreasonableness in the City's use of its TAR system. It ruled that the City could use this system to run a search using 665 search terms proposed by the plaintiffs. The Court stated that a meet and confer and a motion to compel would be needed if the City's subsequent review found responsive documents related to primary claims and defenses in a subject area covered by a previous production phase.

The plaintiffs contended that the City over-designated documents as non-responsive in the training set used for its TAR system. In holding that the producing party was in the best position to evaluate its methods for producing responsive documents, the Court cited Judge Peck's decision in Hyles v. New York City, and Sedona Principle 6. Judge Parker also noted attorneys' obligation under FRCP 26 to certify that a reasonable inquiry was performed when making document productions; the danger of revealing trial strategy and work product; and that perfection is not required in the production of responsive ESI. Judge Parker stated that:

". . . this Court is of the view that there is nothing so exceptional about ESI production that should cause courts to insert themselves as super-managers of the parties' internal review processes, including training of TAR software, or to permit discovery about such process, in the absence of evidence of good cause such as a showing of gross negligence in the review and production process, the failure to produce relevant specific documents known to exist or that are likely to exist, or other malfeasance." [Id. at *29-30]

The Court's in camera review of the TAR system showed that the seed set included more than 7,200 documents, consisting of both randomly selected documents and 'pre-coded example' documents. There were five training rounds and a validation process, and the review team received extensive training.

In granting the request for a sample set, the Court cited the need for transparency, the low responsiveness rate in a high volume of ESI, and specific coding errors identified by the plaintiffs. The sample set is to consist of 400 documents: 300 from one category and 100 from another.

While the Court did not order disclosure of information on the TAR ranking system, it did encourage the City to share this information with the plaintiffs.



The Federal Judicial Center is the official research organization for the United States federal courts. Its governing board, chaired by the Chief Justice of the United States, includes seven federal judges.

In 2017, the FJC published a pocket guide to TAR for federal judges entitled, Technology-Assisted Review for Discovery Requests.

In addressing whether or not TAR constitutes 'a reasonable inquiry' under FRCP 26(g), the Guide states that TAR will perform well in situations where the following conditions exist:

1. The number of documents in a collection is large.

2. Responsive documents are expected to be similar to each other in some fashion.

3. The TAR algorithm measures that similarity.
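To give a rough sense of what "measuring similarity" means, the sketch below computes the cosine similarity of two documents over simple term-frequency vectors. This is a toy stand-in for the proprietary similarity measures actual TAR algorithms use; the function and the sample texts are hypothetical:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between term-frequency vectors:
    1.0 for identical word counts, 0.0 for no words in common."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

seed_doc = "contract breach damages settlement"
candidate = "settlement of the contract damages claim"
print(round(cosine_similarity(seed_doc, candidate), 2))  # prints 0.61
```

A document scoring close to a coded responsive example would be ranked higher for review; real systems add weighting (e.g. TF-IDF) and train a classifier rather than comparing raw counts.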

The guide also notes that TAR is not always better than keyword searching at identifying responsive documents, and that the algorithms used by various vendors often differ greatly. The FJC bemoans the fact that, "There is, to our knowledge, no published manual that evaluates the algorithms of each vendor and explains the requirements of each algorithm for proper seed set construction."

Disputes over the efficacy of TAR will be less likely when the following steps are taken:

1. Proper Seed Set Construction - the parties should be transparent about the process and consider producing the seed set, or at least identifying which documents were part of it.

2. Statistical Validation - the appropriate threshold rates for recall and precision depend both on the circumstances of each case and on the methods used to calculate these rates.
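For readers unfamiliar with the two metrics, the short sketch below shows how recall and precision are calculated from a validation review. The counts are invented for illustration and do not come from the guide:

```python
def recall_precision(tp, fp, fn):
    """Recall: share of all responsive documents the review found.
    Precision: share of documents marked responsive that truly are.
    tp = true positives, fp = false positives, fn = false negatives."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

# Hypothetical validation sample: 80 responsive documents found,
# 20 non-responsive documents wrongly marked responsive, and
# 20 responsive documents the review missed.
r, p = recall_precision(tp=80, fp=20, fn=20)
print(f"recall={r:.0%} precision={p:.0%}")  # prints recall=80% precision=80%
```

The guide's point is that whether 80% recall is "good enough" cannot be answered in the abstract; it turns on the stakes of the case and on how the counts behind the calculation were obtained.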

The guide also provides a draft order entitled, "Order on Using Technology-Assisted Review in Discovery of Electronically Stored Information in Civil Cases".

The Tip of the Night for November 20, 2015 addressed the FJC pocket guide on ESI Discovery.


Sean O'Shea has more than 20 years of experience in the litigation support field with major law firms in New York and San Francisco.   He is an ACEDS Certified eDiscovery Specialist and a Relativity Certified Administrator.

The views expressed in this blog are those of the owner and do not reflect the views or opinions of the owner’s employer.



© 2015 by Sean O'Shea.
