4 explainable AI techniques for machine learning models

At its core, AI is a complex modeling process built on layers of information. To explain an algorithm's decision-making process, start with its input data.

When machine learning models deliver problematic results, they often do so in ways humans can't make sense of -- which becomes dangerous when the model's limitations aren't understood, particularly for high-stakes decisions. Without straightforward techniques for achieving explainable AI, organizations will continue to struggle to implement machine learning algorithms.

Explainability in AI refers to the process of making it easier for humans to understand how a given model generates the results it does -- and how to know when the results should be second-guessed. Specifically, explainable AI discloses the following:

  • the program's strengths and weaknesses;
  • the specific criteria the program uses to arrive at a decision;
  • why a program makes a particular decision, as opposed to alternatives;
  • the level of trust that's appropriate for various types of decisions;
  • what types of errors the program is prone to; and
  • how errors can be corrected.

AI developers need to incorporate explainable AI techniques into their workflows as part of their modeling operations. To achieve explainable AI, they should keep tabs on the data used in models, strike a balance between accuracy and explainability, focus on the end user and develop key performance indicators (KPIs) to assess AI risk. But applying these explainable AI techniques won't be easy.

Off-the-shelf explainable AI techniques a work in progress

In traditional rules-based AI systems, explainability was built into the model because humans typically handcrafted the rules mapping inputs to outputs. But deep learning techniques using semiautonomous neural networks generate models that don't map to traditional human concepts related to the intended goal.

Researchers are working to build learning algorithms -- that is, algorithms that imitate how humans think -- to generate explainable AI systems. Currently, however, most of the dominant learning algorithms do not yield interpretable AI systems, said Ankur Taly, head of data science at Fiddler Labs, an explainable AI tools provider.

One of the limitations of off-the-shelf explainable AI techniques is that they attempt to explain the behavior of AI systems post hoc, rather than altering the systems' structure. Other machine learning model properties, like accuracy, are straightforward to measure, but there are no corresponding simple metrics for AI explainability. Thus, the quality of an explanation or interpretation of an AI system needs to be assessed in an application-specific manner. It's also important for practitioners to understand the assumptions and limitations of the explainable AI techniques themselves.
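To make "post hoc" concrete, the sketch below applies permutation importance -- a model-agnostic technique that probes an already trained model from the outside rather than changing its structure. The dataset and classifier here are illustrative stand-ins, not a recommendation.

    # Minimal sketch: a post hoc, model-agnostic explanation of a trained model.
    # Assumes scikit-learn is available; the data and model are illustrative only.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "opaque" model is trained as usual; the explanation comes afterward.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance shuffles one feature at a time and measures how much
    # held-out accuracy drops -- it describes behavior, not the model's internals.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(X.columns, result.importances_mean),
                              key=lambda item: item[1], reverse=True)[:5]:
        print(f"{name}: {score:.3f}")

Note that a ranking like this says which inputs the model is sensitive to, not whether the model's reasoning is sound -- which is why the quality of an explanation still has to be judged per application.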

"While it is better to have some transparency rather than none, we've seen teams fool themselves into a false sense of security by wiring an off-the-shelf [explainable AI] technique without understanding how the technique works," Taly said.

4 explainable AI techniques

Here are four explainable AI techniques that will help organizations develop more transparent machine learning models while maintaining model performance.

1. Start with the data

The results of a machine learning model can often be explained by the training data itself or by how a neural network interprets a data set. Machine learning models often start with data labeled by humans, and data scientists can sometimes explain the way a model is behaving by looking at the data it was trained on.

"What a particular neural network derives from a data set [is] patterns that it finds that may or may not be obvious to humans," said Aaron Edell, principal business development manager at AWS.

But it can be hard to understand what good data looks like. Biased training data can show up in a variety of ways. A machine learning model meant to identify sheep, for example, might be trained only on pictures taken on sheep farms, causing it to miss sheep in other settings or to mistake white clouds over a farm for sheep. Likewise, facial recognition software can be trained on company headshots, but if those headshots are mostly of Caucasian men, the data is biased.

One good practice is to train machine learning models on data that is indistinguishable from the data the model will be expected to run on. For example, a face recognition model that identifies how long Jennifer Aniston appears in every episode of Friends should be trained on frames from actual episodes rather than on Google image search results for Jennifer Aniston. In a similar vein, it's OK to train models on publicly available data sets, but generic, pretrained models as a service will be harder to explain -- and to change, if necessary.
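Acting on that advice can start with a simple check that each feature looks the same in training as it does in production. The sketch below uses a two-sample Kolmogorov-Smirnov test; the data, feature names and significance threshold are illustrative assumptions.

    # Minimal sketch: flag features whose training distribution differs from the
    # data the model is expected to run on. Data and threshold are illustrative.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    train = {"brightness": rng.normal(0.5, 0.1, 2000),       # e.g., studio headshots
             "face_width": rng.normal(120, 10, 2000)}
    production = {"brightness": rng.normal(0.3, 0.2, 2000),  # e.g., phone photos
                  "face_width": rng.normal(118, 12, 2000)}

    for feature in train:
        stat, p_value = ks_2samp(train[feature], production[feature])
        if p_value < 0.01:
            print(f"'{feature}' looks different in production (KS={stat:.2f}) -- "
                  "the training data may not represent what the model will see.")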

2. Balance explainability, accuracy and risk

The real problem with implementing explainable AI techniques is that there are major tradeoffs when balancing accuracy and transparency in different types of AI models, said Matthew Nolan, senior director of product marketing, decision sciences, at Pegasystems. More opaque models may be more accurate but fail the explainability test. Other types of models, like decision trees and Bayesian networks, are considered more transparent but are less powerful and less able to capture complex relationships.

"These models are critical today as businesses deal with regulations such as GDPR that require explainability in AI-based systems, but this sometimes will sacrifice performance," Nolan said.

Focusing on transparency can cost a business in model performance, but turning to more opaque models can leave a model unchecked and might expose consumers, customers and the business to additional risks or breaches.

To address this gap, platform vendors are starting to embed transparency settings into their AI tool sets, which can make it easier for companies to adjust the acceptable opaqueness or transparency thresholds used in their AI models. Enterprises can adjust a machine learning model based on their needs or on corporate governance policy, so they can manage risk, maintain regulatory compliance and deliver customers a differentiated experience in a responsible way.

Data scientists should also identify when the complexity of new models is getting in the way of explainability. Yifei Huang, data science manager at sales engagement platform Outreach, said there are often simpler models available that attain the same performance, but machine learning practitioners tend to reach for fancier, more advanced models.
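A quick way to test Huang's point is to cross-validate a transparent model head to head against a more complex one before committing to the latter. The sketch below is a minimal illustration on a stand-in dataset; the specific models are example choices, not a prescription.

    # Minimal sketch: compare a transparent model against a more opaque one before
    # accepting the added complexity. Dataset and models are illustrative choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    transparent = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    opaque = RandomForestClassifier(n_estimators=300, random_state=0)

    for name, model in [("logistic regression", transparent), ("random forest", opaque)]:
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

    # If the accuracy gap is negligible, the interpretable model -- whose
    # coefficients can be read directly -- is usually the safer choice.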

3. Focus on the user

AI explainability means something different to a highly skilled data scientist than it does to a frontline worker on the job. The task of implementing explainable AI techniques is not just to foster trust in explanations, but also to help end users make sound decisions, said Ankur Teredesai, CTO and co-founder at KenSci, an AI healthcare platform.

Often, data scientists make the mistake of thinking about explanations from the perspective of a computer scientist, when the end user is a domain expert who may need just enough information to make a decision. For a model that predicts the risk of a patient being readmitted, a physician may want an explanation of the underlying medical reasons, while a discharge planner may want to know the likelihood of readmission in order to plan accordingly.
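In practice, that can mean serving two different views of the same prediction. The sketch below uses a hypothetical readmission model; the feature names, coefficients and threshold are made up purely to illustrate the idea.

    # Minimal sketch: one readmission prediction, two audience-specific views.
    # Model, features and thresholds are hypothetical illustrations.
    import numpy as np

    FEATURES = ["prior_admissions", "hemoglobin_a1c", "days_since_discharge"]
    COEFFICIENTS = np.array([0.8, 0.5, -0.3])   # stand-in for a fitted linear model
    INTERCEPT = -1.2

    def predict_with_contributions(x):
        contributions = COEFFICIENTS * x
        probability = 1.0 / (1.0 + np.exp(-(INTERCEPT + contributions.sum())))
        return probability, dict(zip(FEATURES, contributions))

    def physician_view(x):
        """Clinical drivers behind the risk, ranked by contribution."""
        _, contributions = predict_with_contributions(x)
        return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

    def discharge_planner_view(x):
        """Just the likelihood, framed as the action the planner needs to take."""
        probability, _ = predict_with_contributions(x)
        action = "schedule follow-up" if probability > 0.5 else "routine discharge"
        return f"{probability:.0%} readmission risk -- {action}"

    patient = np.array([3.0, 1.2, 0.5])
    print(physician_view(patient))
    print(discharge_planner_view(patient))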

Teredesai said there are still no general guidelines for explainability, particularly for different types of users. It's also challenging to integrate these explanations into the machine learning and end-user workflows. End users typically view AI explainability as possible actions to take based on a prediction, rather than as an explanation of the logic behind a prediction, and this requires striking the right balance between focusing on the performance level of the prediction and the fidelity of the explanation.

There are a variety of tools for implementing explainability on top of machine learning models that generate visualizations and technical descriptions, but these can be difficult for end users to understand, said Jen Underwood, principal consultant at Impact Analytix. Supplementing visualizations with natural language explanations is a way to partially bridge the data science literacy gap.
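Bridging that gap can be as simple as templating the top feature attributions into a sentence. In the sketch below, the attribution values are made-up placeholders for the output of whatever explanation tool is actually in use.

    # Minimal sketch: turn raw feature attributions into a plain-language sentence.
    # The attribution values are made-up placeholders for a real explainer's output.
    def narrate(prediction_label, attributions, top_k=2):
        ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        phrases = [f"{name.replace('_', ' ')} "
                   f"{'increased' if value > 0 else 'decreased'} the score"
                   for name, value in ranked[:top_k]]
        return (f"The model predicted '{prediction_label}' mainly because "
                + " and ".join(phrases) + ".")

    attributions = {"prior_admissions": 0.42, "hemoglobin_a1c": 0.18, "age": -0.05}
    print(narrate("high readmission risk", attributions))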

Another good practice is to have humans in the loop to evaluate the explanations to see if they make sense to a human, said David Fagnan, director of applied science on the Zillow Offers Analytics team. This can help lead to more accurate models through key improvements, including model selection and feature engineering.

4. Use KPIs for AI risks

Enterprises should consider the specific reasons to use explainable AI techniques when evaluating machine learning models. Teams should, first and foremost, establish a set of KPIs for AI risks, including comprehensiveness, data privacy, bias, fairness, explainability and compliance, said Joydeep Ghosh, Ph.D., chief scientific officer at AI vendor CognitiveScale. It's also useful to generate relevant metrics for key stakeholders.
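To show what such KPIs might look like in code, the sketch below computes two simple proxies -- a demographic parity gap for fairness and the share of decisions that shipped with an explanation -- and checks them against thresholds. The metrics, thresholds and data are illustrative assumptions, not any vendor's scoring methodology.

    # Minimal sketch: a toy AI-risk scorecard. Metrics and thresholds are
    # illustrative assumptions, not any vendor's scoring methodology.
    import numpy as np

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rate between groups."""
        rates = [predictions[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    def risk_scorecard(predictions, groups, explained_mask):
        kpis = {
            "fairness_gap": demographic_parity_gap(predictions, groups),
            "explained_share": explained_mask.mean(),
        }
        thresholds = {"fairness_gap": 0.10, "explained_share": 0.95}
        kpis["fairness_ok"] = kpis["fairness_gap"] <= thresholds["fairness_gap"]
        kpis["explainability_ok"] = kpis["explained_share"] >= thresholds["explained_share"]
        return kpis

    rng = np.random.default_rng(2)
    predictions = rng.integers(0, 2, 1000)       # stand-in for model decisions
    groups = rng.integers(0, 2, 1000)            # stand-in for a protected attribute
    explained_mask = rng.random(1000) < 0.97     # did each decision ship with an explanation?
    print(risk_scorecard(predictions, groups, explained_mask))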

Organizations are beginning to establish measurement targets that determine acceptable operating values. Nonprofit AI Global, for example, has established the AI Trust Index, a scoring benchmark for explainable AI that is like a FICO score. The scoring system enables firms to not only establish their own best practices, but also compare their enterprise's AI practices against industry benchmarks. Vendors are starting to automate this process with tools for automatically scoring, measuring and reporting on risk factors across the AI operations lifecycle.

Mark Stefik, research fellow at PARC, a Xerox company, said explainable AI techniques and tools are getting better, but he cautioned the technology is at an early research stage with proof-of-concept prototypes. There are substantial technology risks and gaps in machine learning and in AI explanations, depending on the application.

"When someone offers you a silver bullet explainable AI technology or solution, check whether you can have a common-grounded conversation with the AI that goes deep and scales to the needs of the application," Stefik said.
