Evidence of How Bad Manual Document Review Is
The Tip of the Night for December 6, 2018 discussed a study by Herbert Roitblat, Anne Kershaw, and Patrick Oot on the degree to which the results of manual document review performed by three teams of reviewers corresponded. See Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review, 61(1) J. Assoc. Inf. Sci. Technol. 70–80 (2010). A law review article by Maura Grossman and Gordon Cormack (the well-known authors of a glossary of TAR terms, as mentioned in the Tip of the Night for June 4, 2015) includes a table that does a good job of illustrating the disparity between each review team's results.
See Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 11, 14 (2011), available at http://scholarship.richmond.edu/jolt/vol17/iss3/5 . The review set consisted of 5,000 documents. Team A's production and the original production shared only 16.3% of the total 'responsive' documents; Team B and the original production had a 15.8% overlap; and Team A and Team B had only 28.1% of their responsive documents in common.
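Grossman and Cormack measure "overlap" as the Jaccard index: the number of documents both teams marked responsive, divided by the number of documents either team marked responsive. A minimal sketch of that calculation (the document IDs below are hypothetical, purely for illustration):

```python
def overlap(set_a, set_b):
    """Jaccard index: documents both sets share, divided by all documents in either set."""
    if not set_a and not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical document IDs that two review teams each coded as responsive.
team_a = {101, 102, 103, 104, 105}
team_b = {104, 105, 106, 107}

print(round(overlap(team_a, team_b), 3))  # 2 shared of 7 total -> 0.286
```

Note how severe this metric makes the disagreement look: two teams can each be internally consistent yet share only a small fraction of their combined responsive sets, which is exactly what the 16.3%, 15.8%, and 28.1% figures above reflect.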
In the same article, the authors cite the results of a study by the researcher Ellen Voorhees of three teams of human document assessors on a set of more than 13,000 documents.
Id. at 13. The teams' results corresponded with one another by only 40% to 50%.
This is good evidence that manual document review cannot be relied upon to locate all of the relevant documents in a data set.