The Grossman-Cormack Glossary of Technology-Assisted Review defines 'elusion' as "the fraction of Documents identified as Non-Relevant by a search or review effort that are in fact Relevant." A low elusion rate can be a sign that document review is being performed correctly.
After performing technology-assisted review (TAR) in Relativity, an elusion test can be run as part of an Active Learning project. An Active Learning project displays reviewer statistics, including a 'document rank distribution' chart and a 'prioritized review progress' graph. The former is a bar chart showing how the model's relevance rankings compare with reviewers' coding decisions. The 'prioritized review progress' is a line graph tracking how many reviewed documents are found to be relevant. (For this purpose, a relevant document is one the model predicts to be relevant that a document reviewer then confirms as relevant.)
When the relevance rate declines and levels off, Relativity recommends running an elusion test. The elusion test appears as an option in the Active Learning project, above the chart and the graph. At least 20 document reviewers must be added before an elusion test can be run. The admin sets a rank cutoff: a predicted relevance score below which the test will sample uncoded documents the model considers non-relevant. The sample can be a fixed size, or a statistical sample whose size is calculated to achieve a given confidence level and margin of error. Documents already coded by human reviewers are excluded from the sample set.
The confidence level is the probability that the sample set will be representative of the complete document set. The margin of error is the maximum amount by which the rate observed in the sample is expected to differ from the rate in the full set.
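To make the relationship between confidence level, margin of error, and sample size concrete, here is a minimal sketch of the standard sample-size formula (normal approximation with a finite-population correction). This is illustrative only: the function name and the conservative prevalence assumption of p = 0.5 are my own, not Relativity's actual calculation.

```python
import math

# z-scores for common confidence levels (normal approximation)
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    """Sample size needed for a given confidence level and margin of error.

    Uses the normal approximation with a finite-population correction.
    p=0.5 is the most conservative prevalence assumption (largest sample).
    """
    z = Z_SCORES[confidence]
    # Sample size for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite-population correction shrinks the sample for smaller collections
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(100_000))                # 383 documents at 95% / ±5%
print(sample_size(100_000, 0.95, 0.02))    # tighter margin -> larger sample: 2345
```

Note how halving the margin of error roughly quadruples the sample size, which is why a statistical sample can vary so much depending on the settings chosen.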
The elusion test creates a new queue of documents for reviewers to code. Once all documents in the sample have been reviewed, Relativity reports an elusion rate: the fraction of sampled documents that were coded relevant.
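The elusion-rate calculation itself is simple, and the margin of error from the sampling settings carries over to it. The sketch below is a hypothetical illustration (Relativity computes this for you); the function name and the normal-approximation confidence interval are my own assumptions.

```python
import math

def elusion_rate(relevant_in_sample, sample_size, z=1.96):
    """Elusion rate: fraction of sampled below-cutoff documents coded relevant.

    Returns the rate plus a normal-approximation confidence interval
    (z=1.96 corresponds to 95% confidence).
    """
    rate = relevant_in_sample / sample_size
    half_width = z * math.sqrt(rate * (1 - rate) / sample_size)
    return rate, (max(0.0, rate - half_width), min(1.0, rate + half_width))

# Example: 8 of 400 sampled "non-relevant" documents turn out to be relevant
rate, (lo, hi) = elusion_rate(8, 400)
print(f"elusion rate: {rate:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
```

A low elusion rate suggests that few relevant documents sit below the rank cutoff, supporting a decision to stop review; a high rate suggests the model is still missing relevant material.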