By Patrick McDonald,
Chief Architect, Advanced Analytics at Clarity Insights. Patrick brings 23 years of experience to over 50 data science and advanced analytics projects and has delivered $4.4 billion to client bottom lines.

“Care to explain that to me?” is one of the great intimidating conversation starters in the business world. It puts the recipient on notice that you, for one, are not impressed with his or her point of view on a given topic. But, what if you have to ask “Care to explain that to me?” to an Artificial Intelligence model? The model isn’t intimidated. In most cases, it has no way of explaining itself to you even if it wanted to.

The concept of Explainable Artificial Intelligence (XAI) offers a solution. Gartner sees a strong future for Explainable AI, a collection of capabilities and techniques that make AI models more transparent. The analyst firm included it in their recent report, “Top 10 Data and Analytics Technology Trends That Will Change Your Business.”


Understanding AI’s Trust Problem

We interact with AI in our daily lives, often without realizing it. Software that emulates human thinking is popping up in travel apps, route-mapping utilities and ubiquitous online advertising algorithms, to name just a few examples. The pervasiveness of AI should not inure us to its power, though. It’s one thing to rely on AI to suggest a good restaurant when we’re out of town. It is quite another when AI decides whether your child can have a medication to treat a serious illness.

As AI starts to play a role in functions and business decisions that truly affect us, trust emerges as a critical issue. AI technologies have the potential to make decisions or prompt real-world actions that can damage our brands, trigger legal liabilities, put our companies out of compliance with the law or even put people’s lives in danger. AI can also embed racial bias and other ethical blind spots. It is, after all, just a machine, and machines are not good at understanding the complete human equation embodied in the decisions they render.

Look critically at AI and you can see how essential it is for people to trust the AI model. Indeed, Gartner predicts that by 2023, over 75% of large organizations will hire artificial intelligence specialists in behavior forensics, privacy and customer trust to reduce brand and reputation risk. This requires transparency in the AI model, however, which is often lacking.

 

Struggling with Transparency in AI

AI models can be overwhelmingly complex. Even experts have difficulty parsing just how they function. AI also tends to generate unmanageable amounts of data, a form of information overload. Too much data and too much model complexity are unhealthy for an AI program: together, they leave even the model’s builders unable to explain why it behaves the way it does.

 

The AI Governance Solution

The solution to the problem that arises with opaque, overly complex “black box” AI models is known as AI governance. Gartner describes it as “The process of assigning and assuring organizational accountability, decision rights, risks, policies and investment decisions for applying AI, predictive models and algorithms.” Explainable AI is a critical component for delivering much-needed AI governance.

 

Defining Explainable AI

XAI offers a technical foundation in support of AI governance. It’s a set of capabilities that describes how an AI model works. It explains the model, providing transparency regarding the model’s “thought process.” So, it’s no longer, “Why did the model deprive my child of this medicine?” Instead, it’s, “Oh, I see. According to Explainable AI, the model discovered that the medicine prescribed for my child has potential side effects when combined with another of my child’s medications.”

Explainable AI reveals an AI model’s strengths and weaknesses. It predicts the model’s likely behavior, while highlighting potential biases. XAI frameworks can usually articulate the inner workings of an algorithmic decision-making model, leading to better understanding of the model’s accuracy, fairness, accountability and stability. Used correctly, explainable AI should increase trust in AI models and, with it, their adoption.
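To make that idea concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation importance, built with scikit-learn. The feature names, the synthetic data and the medication scenario are hypothetical illustrations of the general approach, not any particular vendor’s method.

```python
# A minimal, hypothetical sketch of post-hoc explainability: train a
# "black box" classifier, then ask which inputs actually drive its
# decisions using permutation importance (model-agnostic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features behind an "approve this medication?" decision.
feature_names = ["patient_age", "dosage_mg", "interacting_drug_count", "allergy_flag"]
X = rng.normal(size=(1000, len(feature_names)))

# Synthetic labels: the (hidden) rule depends mostly on drug interactions
# and allergies, which a good explanation should surface.
y = ((1.5 * X[:, 2] + 1.0 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# features the model truly relies on produce the largest drops.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>24}: {score:.3f}")
```

Commercial XAI products go much further than this, but the underlying idea is the same: surface which factors a model relied on so a human can judge whether its reasoning is acceptable.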

 

Implementing XAI

Explainable Artificial Intelligence sounds great. How do you actually make it happen?

A number of promising commercial solutions and open source initiatives are taking on the challenge. The U.S. Defense Advanced Research Projects Agency (DARPA), for example, runs an Explainable AI program aimed at producing machine learning (ML) techniques that yield more explainable models, so that people can understand, appropriately trust and manage AI solutions. UC Berkeley, UCLA, MIT, Oregon State, Rutgers, SRI International, PARC and others are bringing out explainability solutions that examine deep neural nets (DNNs).

Examples of companies with commercial XAI offerings include data science platforms like DataRobot Labs and H2O.ai. These solutions leverage deep learning and automatically generate explanations of AI models using natural (i.e. human) language. Tazi.ai has an interactive solution that visualizes patterns based on AI models for use by nontechnical business users. DarwinAI has a tool that provides granular insights into neural network performance. Salesforce Einstein Discovery explains model findings, alerting users to possible bias in data.

 

Conclusion

It’s still early enough in the lifecycle of corporate AI to put effective AI governance in place before risks cause real business problems. Explainable AI can be a key element of the AI governance process. With XAI, it is possible to gain the business benefits of AI while reducing the risks inherent in deploying untrustworthy or opaque AI models. Commercial solutions are coming onto the market, and open source initiatives also provide good options for explainable AI.

The broader challenge, which we can help you with, is to identify the opportunities for AI in your business along with the attendant risks. Then, working closely with your business, we can design and help implement an AI governance program that makes AI a useful and trusted part of your business.
