TAR for Smart People - Outline - Chapter 8


Here's another installment in my outline of John Tredennick's 'TAR for Smart People'. I last posted an installment on November 23, 2016. Tonight's installment is on Chapter 8 - Subject Matter Experts.

8. Subject Matter Experts - What Role Should They Play in TAR 2.0 Training?

- Senior attorneys don't want to spend time reviewing irrelevant documents, and no one wants to wait for the expert to have time to review seed documents each time new documents are uploaded.

A. Research Population

A study was conducted on how important SMEs are to the review process, using TREC (Text REtrieval Conference) reviewers' judgments on Enron data, measured against the decisions of the TREC topic authorities.

B. Methodology

The random selection of documents for training included documents on which the SMEs and reviewers agreed as well as documents on which they disagreed.

C. Experts vs. Review Teams: Which Produced the Better Ranking?

The study assumed that the topic authorities made the correct decisions; it did not independently evaluate them.

D. Using the Experts to QC Reviewer Judgments

- Third set of training documents - the SME corrected the reviewer decisions.

- Prediction software was used to rank the training documents by the degree of disagreement between the reviewer's tag and the software's score:

a. the reviewer tagged the document relevant, but the software scored it as highly irrelevant.

b. the reviewer tagged the document non-relevant, but the software scored it as highly relevant.

The biggest outliers were selected, and the SME checked the top 10% of the training documents. The ranking was then re-run based on the changed values and plotted as a separate line on the yield curve.
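Here is a minimal Python sketch of what this kind of QC pass might look like. It only illustrates the idea described above - the document fields, the disagreement score, and the 10% cutoff are my own assumptions, not Catalyst's actual Insight Predict implementation.

```python
# Hypothetical sketch of an "expert QC" pass: rank training documents by how
# strongly the prediction engine disagrees with the reviewer's tag, then send
# the top 10% of outliers to the subject matter expert for re-review.
# The document structure and scoring here are illustrative assumptions.

def select_qc_candidates(training_docs, fraction=0.10):
    """Return the training documents whose reviewer tag most strongly
    conflicts with the engine's relevance score (0.0 = clearly irrelevant,
    1.0 = clearly relevant)."""
    def disagreement(doc):
        if doc["reviewer_tag"] == "relevant":
            # Tagged relevant, but the engine scored it near 0 -> big outlier.
            return 1.0 - doc["predicted_score"]
        # Tagged non-relevant, but the engine scored it near 1 -> big outlier.
        return doc["predicted_score"]

    ranked = sorted(training_docs, key=disagreement, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]


# Example usage: the SME reviews only the biggest outliers, overturning tags
# where the reviewer appears to have erred; the ranking is then re-run.
training_docs = [
    {"id": "DOC-001", "reviewer_tag": "relevant", "predicted_score": 0.05},
    {"id": "DOC-002", "reviewer_tag": "non-relevant", "predicted_score": 0.92},
    {"id": "DOC-003", "reviewer_tag": "relevant", "predicted_score": 0.88},
]
for doc in select_qc_candidates(training_docs, fraction=0.10):
    print(doc["id"], "- flag for SME review")
```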

E. Plotting the Differences: Expert vs. Reviewer Yield Curves

Yield Curve -

x-axis - percentage of documents reviewed.

y-axis - percentage of relevant documents found.
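As a rough illustration of how a yield curve like this can be computed from a ranked document list, here is a small Python sketch. It assumes a simple true/false relevance label per document and is not the Insight Predict implementation.

```python
# Hypothetical sketch: compute yield-curve points from a ranked review order.
# Each point is (percent of documents reviewed, percent of relevant documents
# found so far). Reviewing in random order produces the straight "linear
# review" baseline; reviewing in ranked order should rise much faster.

def yield_curve(ranked_relevance):
    """ranked_relevance: list of booleans, True if the document at that rank
    is relevant, listed in the order the documents would be reviewed."""
    total_relevant = sum(ranked_relevance)
    points = []
    found = 0
    for i, is_relevant in enumerate(ranked_relevance, start=1):
        found += is_relevant
        pct_reviewed = 100.0 * i / len(ranked_relevance)
        pct_found = 100.0 * found / total_relevant
        points.append((pct_reviewed, pct_found))
    return points


# Toy example: 10 documents, 4 of them relevant, ranked so the relevant
# ones cluster near the top of the review order.
ranking = [True, True, False, True, False, True, False, False, False, False]
for pct_reviewed, pct_found in yield_curve(ranking):
    print(f"{pct_reviewed:5.1f}% reviewed -> {pct_found:5.1f}% of relevant found")
```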

The gray line representing linear review merely shows that the percentage of relevant documents found will increase at a constant rate when documents are reviewed randomly; when 20% of the documents have been reviewed, 20% of the responsive documents will have been found. On the first issue tested, the review team's training required reviewing a slightly higher percentage of documents to reach an 80% recall rate, but beyond 80% recall, SMEs and reviewers performed equally well. The rankings generated by the expert-only review were almost identical to the rankings produced by the review team with QC assistance from the expert. On the second issue tested, the three methods (the reviewers, the experts, and the review with SMEs correcting some reviewer decisions) performed equally well. On the third issue, the expert and reviewer methods were equally good, but the 'expert QC' method did not perform as well in getting from 80% to 90% recall. The fourth issue actually showed the expert significantly underperforming the reviewers. Generally speaking, all three methods got these results:

Issue 1 - 20% of documents must be reviewed to get 80% recall.

Issue 2 - 50% of documents must be reviewed to get 80% recall.

Issue 3 - 30% of documents must be reviewed to get 80% recall.

Issue 4 - 42% of documents must be reviewed to get 80% recall, except for the expert working alone, who had to review 65%.
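The 'percentage of documents that must be reviewed to reach 80% recall' figures above are simply read off the yield curve. A small, self-contained Python sketch of that step, using made-up curve points rather than the study's data:

```python
# Hypothetical sketch: find the smallest review depth at which a yield curve
# reaches a target recall. The curve points below are invented for illustration.

def depth_for_recall(curve_points, target_recall=80.0):
    """curve_points: (percent reviewed, percent of relevant found) pairs,
    in review order. Returns the first depth at which recall reaches the
    target, or None if it never does."""
    for pct_reviewed, pct_found in curve_points:
        if pct_found >= target_recall:
            return pct_reviewed
    return None


# Toy curve: recall climbs quickly at first, then flattens out.
toy_curve = [(10, 30.0), (20, 55.0), (30, 70.0), (40, 82.0), (50, 90.0),
             (60, 95.0), (80, 99.0), (100, 100.0)]
print(depth_for_recall(toy_curve))  # -> 40, i.e. 40% reviewed for 80% recall
```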

The results were obtained using Insight Predict, Catalyst's proprietary algorithm, but they still suggest that the notion that only an SME can train a system may not be correct.

SMEs aren't always available and they bill at higher rates. Catalyst suggests that experts should interview witnesses and find important documents to feed into the system, rather than spend their time training predictive coding systems.

