TAR for Smart People Outline - Chapters 11 and 12

Here's another installment in my outline of John Tredennick's 'TAR for Smart People'. I last posted an installment on January 27, 2017. Tonight's installment covers Chapter 11, 'Case Study: Using TAR to Find Hot Docs for Depositions: How Insight Predict Succeeded Where Keywords Failed', and Chapter 12, 'Case Study: Using TAR to Expedite Multi-Language Review'.

11. Case Study: Using TAR to Find Hot Docs for Depositions: How Insight Predict Succeeded Where Keywords Failed

A. Challenge: Quickly Find Hot Docs in Garbled Production

1. Multi-district litigation involving a medical device. 77K electronic documents were produced shortly before depositions. The production had no metadata, and poor scanning led to bad OCR. Focused keyword searches turned up documents of which only 5% were possible deposition exhibits and 46% were relevant.

B. Solution: Using Insight Predict to Prioritize Hot Documents for Review

1. The lead attorney QC'd documents already tagged as hot, plus a few hundred more targeted hits and samples.

2. Using those seeds, Predict ranked the entire document set for hot documents (a generic sketch of seed-based ranking follows this list).

3. The top 1,000 documents were then pulled for attorneys to evaluate.

4. TAR increased the proportion of documents that were potential exhibits to 27%, and of relevant documents to 65%.
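
The book does not disclose how Insight Predict's ranking engine works internally, so the following is only a stand-in sketch in Python: a generic TF-IDF plus logistic-regression ranker that trains on reviewer-tagged seeds and then scores the rest of the collection, matching the workflow the case study describes. The function names and parameters are mine, not Catalyst's.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rank_by_seeds(seed_texts, seed_labels, corpus_texts):
    """Rank every document by predicted probability of being hot,
    using reviewer-tagged seeds (1 = hot, 0 = not hot) as training
    data. A generic stand-in for a proprietary TAR ranking engine."""
    vectorizer = TfidfVectorizer(max_features=50000, stop_words="english")
    X_seeds = vectorizer.fit_transform(seed_texts)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_seeds, seed_labels)
    scores = model.predict_proba(vectorizer.transform(corpus_texts))[:, 1]
    # Highest-scoring documents first, e.g. pull the top 1,000 for review
    return sorted(range(len(corpus_texts)), key=lambda i: scores[i], reverse=True)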

C. Good Results from Difficult Data

1. Insight Predict allows you to use an unlimited number of seeds from judgmental sampling.

12. Case Study: Using TAR to Expedite Multi-Language Review: How Insight Predict's Unique Capabilities Cut Review by Two-Thirds

A. The Challenge: Quickly Review Multi-Language Documents

1. Shareholder class action alleging violations of securities laws required review of a mixed set of Spanish and English documents: 66K files to be reviewed for responsive documents to produce.

B. The Solution: Prioritize Documents for Review Using Insight Predict

1. A few hundred emails from key custodians, already reviewed by attorneys, were used as seeds to train the Predict engine.

2. Separate rankings were created for the two languages. On-demand batches were sent to the review team from the top of the rankings.

3. After training Predict with the initial seeds, the responsiveness rate of batches sent to the review team increased from 10% to 40%.

4. 91% recall was achieved after reviewing only one-third of the documents (arithmetic illustrated below).
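
Recall here means the share of all responsive documents that the prioritized review actually surfaced. The counts below are hypothetical, since the chapter reports only the percentages; they just show the arithmetic behind a 91% figure.

def recall(found_responsive, total_responsive):
    """Recall = responsive documents found / responsive documents in the set."""
    return found_responsive / total_responsive

# Hypothetical illustration: if the 66K-file set held 5,000 responsive
# documents and 4,550 of them appeared in the top-ranked third, recall
# would be 0.91, with two-thirds of the set left unreviewed.
print(recall(4550, 5000))  # 0.91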

C. Uncovering a Hidden Trove

1. Contextual Diversity Sampling - Predict looks for the biggest pockets of documents with no reviewer judgments, finds the best examples within them, and sends those to the review team (a rough sketch follows this list).

2. In this case study, contextual diversity sampling surfaced hundreds of spreadsheets that had been omitted from the initial manual seeds.

3. Every responsive document was sorted into the top third of the review set, making review of the remaining two-thirds unnecessary.
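
The outline describes what contextual diversity sampling accomplishes, not how Predict implements it. Below is a rough Python sketch under one plausible reading, using k-means clustering as an assumed stand-in: group the unjudged documents, then send the review team the most central example from each of the biggest clusters. Every name and parameter here is hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def diversity_sample(unjudged_texts, n_clusters=20):
    """Pick one representative document from each cluster of unjudged
    documents, largest clusters first, so reviewers see examples from
    the biggest unexplored pockets of the collection."""
    X = TfidfVectorizer(max_features=50000).fit_transform(unjudged_texts)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    picks = []
    for c in np.argsort(sizes)[::-1]:  # largest cluster first
        members = np.where(km.labels_ == c)[0]
        # The document closest to the cluster center serves as the "best example"
        dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[c], axis=1)
        picks.append(members[np.argmin(dists)])
    return picks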

