NIST Releases Core Principles to Judge ‘Explainable AI’

Scientists at the National Institute of Standards and Technology have proposed four fundamental principles for judging how explainable the decisions made by artificial intelligence systems really are.


The draft publication released Tuesday—Four Principles of Explainable Artificial Intelligence—encompasses properties of explainable AI and is “intended to stimulate a conversation about what we should expect of our decision-making devices,” according to the agency. It’s also the latest slice of a much broader effort NIST is steering to promote the production of trustworthy AI systems.  


“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” NIST Electronic Engineer and draft co-author Jonathon Phillips said in a statement. “But an explanation that would satisfy an engineer might not work for someone with a different background. So, we want to refine the draft with a diversity of perspective and opinions.”


NIST’s four principles of explainable AI stress explanation, meaningfulness, accuracy and what the authors deem “knowledge limits.” As the agency states, they are: 


AI systems should deliver accompanying evidence or reasons for all their outputs.
Systems should provide explanations that are meaningful or understandable to individual users.
The explanation correctly reflects the system’s process for generating the output.
The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output. 

A final caveat included by the agency notes that the last principle implies that “if a system has insufficient confidence in its decision, it should not supply a decision to the user.” 
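To make the first and last principles concrete, here is a minimal sketch of how a system might pair every output with evidence and abstain when confidence falls short. The wrapper, the 0.80 threshold, and the example scores are illustrative assumptions and do not come from NIST’s draft.

```python
# Minimal sketch (assumptions, not NIST's draft): a decision wrapper that
# attaches evidence to every output (principle 1) and withholds a decision
# when confidence is below its designed operating range (principle 4).
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ExplainedDecision:
    label: Optional[str]   # None when the system declines to decide
    confidence: float
    evidence: str          # human-readable reason accompanying the output

CONFIDENCE_THRESHOLD = 0.80  # illustrative "knowledge limit", chosen arbitrarily

def decide(probabilities: Dict[str, float]) -> ExplainedDecision:
    """Pick the most likely label, but abstain below the confidence threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Insufficient confidence: no decision is supplied to the user.
        return ExplainedDecision(
            None, confidence,
            f"Declined: top score {confidence:.2f} is below the "
            f"{CONFIDENCE_THRESHOLD:.2f} operating threshold.")
    # Every output ships with accompanying evidence for why it was produced.
    return ExplainedDecision(
        label, confidence,
        f"Chose '{label}' because its score {confidence:.2f} exceeded "
        f"the operating threshold.")

print(decide({"approve": 0.91, "deny": 0.09}))  # returns a decision with evidence
print(decide({"approve": 0.55, "deny": 0.45}))  # abstains and explains why
```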


NIST’s draft also includes a call for pate ..
