NIST Explains AI

Explainable artificial intelligence (XAI) refers to methods that allow the results of AI systems to be understood by humans. NIST's NISTIR 8312, Four Principles of Explainable Artificial Intelligence, establishes these four principles for explainable AI:

Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.

Meaningful: Systems provide explanations that are understandable to individual users.

Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.

Knowledge Limits: The system only operates under conditions for which it was designed, or when the system reaches a sufficient confidence in its output.
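As a rough illustration of the Knowledge Limits principle, a system can decline to produce an output when its confidence falls below a threshold, while attaching a reason to every result it does (or does not) return, in the spirit of the Explanation principle. This is a minimal hypothetical sketch, not anything prescribed by NISTIR 8312; the class names, threshold value, and score format are all assumptions for the example.

```python
# Hypothetical sketch of the Knowledge Limits principle: a wrapper that
# returns a label only when confidence clears a threshold, and always
# supplies an accompanying explanation for its behavior.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExplainedOutput:
    label: Optional[str]     # None means the system declines to answer
    confidence: float
    explanation: str         # evidence/reason accompanying the output


def classify_with_limits(scores: dict, threshold: float = 0.8) -> ExplainedOutput:
    """Pick the top-scoring label, but abstain if confidence is too low."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return ExplainedOutput(
            None, confidence,
            f"Declined: top score {confidence:.2f} is below threshold {threshold}.")
    return ExplainedOutput(
        label, confidence,
        f"Chose '{label}' because its score {confidence:.2f} met threshold {threshold}.")


# High-confidence input: the system answers and explains why.
print(classify_with_limits({"cat": 0.95, "dog": 0.05}).label)   # cat
# Ambiguous input: the system stays within its knowledge limits and abstains.
print(classify_with_limits({"cat": 0.55, "dog": 0.45}).label)   # None
```

The design point is that abstention is itself an output with an explanation, so the Explanation and Knowledge Limits principles reinforce each other rather than conflict.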