NIST Asks A.I. to Explain Itself

NIST scientists have proposed four principles for judging how explainable an artificial intelligence's decisions are. (Image credit: B. Hayes/NIST)

It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?


This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices. 


The report is part of a broader NIST effort to help develop trustworthy AI systems. NIST’s foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness and explainability, which is the focus of this latest publication. 


The authors are requesting feedback on the draft from the public — and because the subject is a broad one, touching upon fields ranging from engineering and computer science to psychology and legal studies, they are hoping for a wide-ranging discussion.


“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said NIST electronic engineer Jonathon Phillips, one of the report’s authors. “But an explanation that would satisfy an engineer might not work for someone with …
