Evidence of How Bad Manual Document Review Is

The Tip of the Night for December 6, 2018 discussed a study by Herbert Roitblat, Anne Kershaw, and Patrick Oot on the degree to which the results of manual document review performed by separate teams of reviewers corresponded. See Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review, 61(1) J. Assoc. Inf. Sci. Technol. 70-80 (2010). A law review article published by Maura Grossman and Gordon Cormack (the well-known authors of a glossary of TAR terms, as mentioned in the Tip of the Night for June 4, 2015) includes a table which does a good job of illustrating the disparity between each review team's results.



See Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 11, 14 (2011), available at http://scholarship.richmond.edu/jolt/vol17/iss3/5 . The review set consisted of 5,000 documents. Team A's review and the original production shared only 16.3% of the documents marked responsive; Team B and the original production had a 15.8% overlap; and Team A and Team B had only 28.1% of their responsive documents in common.
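To give a rough sense of what these overlap figures measure, here is a minimal sketch in Python, assuming the "overlap" reported in the article is the Jaccard measure: the documents both reviews marked responsive, divided by the documents either review marked responsive. The document IDs and set sizes below are hypothetical and are not taken from the study.

def overlap(review_a, review_b):
    # Jaccard overlap: documents marked responsive by both reviews,
    # divided by documents marked responsive by either review.
    a, b = set(review_a), set(review_b)
    if not a | b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical example: two teams reviewing the same collection
# agree on only some of the documents they each mark responsive.
team_a = {"DOC-001", "DOC-002", "DOC-003", "DOC-007"}
team_b = {"DOC-002", "DOC-003", "DOC-009", "DOC-011", "DOC-012"}

print(f"Overlap: {overlap(team_a, team_b):.1%}")  # 28.6% in this example

Note that because the measure divides by the union of both teams' responsive sets, two reviews can each appear reasonably thorough on their own and still produce a low overlap figure.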



In the same article, the authors cite the results of a study by the researcher Ellen Voorhees, in which three teams of human assessors judged the relevance of a set of more than 13,000 documents.



Id. at 13. The teams' relevance determinations overlapped by only 40% to 50%.


This is good evidence that manual document review cannot be relied upon to locate all of the relevant documents in a data set.

