
NIST Explains AI

Explainable Artificial Intelligence (XAI) refers to methods that allow the results of AI systems to be understood by humans. NIST's NISTIR 8312, Four Principles of Explainable Artificial Intelligence, establishes these four principles for explainable AI:


Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.


Meaningful: Systems provide explanations that are understandable to individual users.


Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.


Knowledge Limits: The system only operates under conditions for which it was designed, or when the system reaches a sufficient confidence in its output.
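The Knowledge Limits and Explanation principles can be illustrated together with a short sketch. This is an illustrative example, not anything from NISTIR 8312 itself: a hypothetical classifier wrapper that declines to answer when its top confidence falls below a chosen threshold, and attaches a brief explanation to every output. The function name, label names, and the 0.8 threshold are all assumptions made for the example.

```python
def classify_with_limits(scores, threshold=0.8):
    """Return (label, explanation), or (None, explanation) when unsure.

    scores: dict mapping label -> model confidence in [0, 1].
    threshold: minimum confidence required to produce a label
               (an illustrative value, not a NIST-specified one).
    """
    label = max(scores, key=scores.get)
    confidence = scores[label]
    if confidence < threshold:
        # Knowledge Limits: decline to operate when confidence is
        # below the level the system was designed for.
        return None, (f"Declined: top confidence {confidence:.2f} "
                      f"is below the {threshold:.2f} threshold.")
    # Explanation: every output is accompanied by its supporting reason.
    return label, f"Chose '{label}' with confidence {confidence:.2f}."


print(classify_with_limits({"responsive": 0.92, "non-responsive": 0.08}))
print(classify_with_limits({"responsive": 0.55, "non-responsive": 0.45}))
```

In the second call the wrapper returns no label at all, but still delivers an explanation of why it declined, which is the point of pairing the two principles.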


Sean O'Shea has more than 20 years of experience in the litigation support field with major law firms in New York and San Francisco. He is an ACEDS Certified eDiscovery Specialist and a Relativity Certified Administrator.

The views expressed in this blog are those of the owner and do not reflect the views or opinions of the owner’s employer.

If you have a question or comment about this blog, please make a submission using the form to the right. 


© 2015 by Sean O'Shea . Proudly created with Wix.com
