Scaling AI: 3 Reasons Why Explainability Matters 

As artificial intelligence and machine learning systems become more ubiquitous in decision-making, should we expect our confidence in their outcomes to match the confidence we place in human decision-makers? When humans make decisions, we can rationalize the outcomes through inquiry and conversation about how expert judgment, experience, and the available information led to the decision. Unfortunately, engaging in a similar conversation with a machine isn't possible yet. To borrow the words of former Secretary of Defense Ash Carter, speaking at a 2019 SXSW panel about the post-analysis of an AI-enabled decision: "'the machine did it' won't fly."


As we evolve human-machine collaboration, establishing trust, transparency, and accountability at the outset of decision support system and algorithm design is paramount. Without these, people may hesitate to act on AI recommendations because they cannot see how the machine reached its conclusion. Fortunately, we're beginning to see some light at the end of the "AI is a black box" tunnel as research on AI explainability is commercialized. These early-stage explainability solutions are essential for leaders who want to trust, govern, and scale their AI applications across the enterprise.


Defining Explainable AI 


As AI becomes more embedded in our lives, a crucial part of the adoption journey is our ability to understand how AI makes decisions, why it reaches certain conclusions, and how confident we can be in the results. That is explainability in a nutshell: a new "-ility" added to the quality attributes used to evaluate a system's performance. Current explainable AI solutions attempt to show, in terms humans understand, how a model turns input data into a decision.
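
To make the idea concrete, here is a minimal sketch of one widely used explanation technique, permutation feature importance, which ranks a model's inputs by how much predictive accuracy drops when each feature is randomly shuffled. The choice of Python, scikit-learn, model, and dataset are illustrative assumptions on our part; the article does not name any specific solution.

```python
# A minimal sketch of one common explainability technique: permutation
# feature importance. The model, dataset, and library here are illustrative
# assumptions, not a reference to any particular commercial product.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque "black box" model on a well-known public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much held-out accuracy drops when each feature is shuffled;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most, in human-readable terms.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A ranking like this does not open the model's internals, but it gives a human reviewer a defensible answer to "what did the machine base its decision on," which is the kind of accountability the article argues for.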


Explainability ..
