
Why transparency in AI matters for businesses

To ensure model accuracy, businesses need to understand why their machine learning models make the decisions they do. Certain tools and techniques can help with that.

When it comes to simple AI programs, such as a recommendation engine, the average user can likely piece together the thinking behind it. Watching an action movie leads to the recommendation of more action movies. Easy.

But most AI applications, especially in the enterprise, are far more advanced. Many offer little to no insight into what's happening under the hood for the typical business user.

Using certain tools and techniques for transparency in AI, however, businesses can better understand how their machine learning and deep learning models work -- and ensure they are getting accurate, unbiased results.

The importance of explainable AI

Explainable AI, also called XAI, generally refers to techniques and tools that help people better understand why a machine learning or deep learning model makes the predictions it does.

Transparency in AI is often used interchangeably with explainable AI, but it focuses more on ensuring that a model is open and visible. With a transparent, or interpretable, machine learning model, a data scientist can see the mechanisms that make the model work -- including the individual algorithms it uses and the data that goes into it.

As organizations increasingly turn to AI to make important decisions, they need to make sure their models produce accurate predictions. Explainability is key to that.

Take the logistics industry, for example.

In addition to automated tools, logistics planners need to rely on gut feeling and manual operations in their day-to-day work, said Marc Meyer, chief commercial officer at Transmetrics, maker of an AI-powered tool that analyzes, models and predicts transport flows.

AI tools can quickly weigh weather, traffic, fuel, road conditions and other relevant data to plan the best shipping route -- something a human planner would struggle to do.

But the planner still needs to understand and modify the AI system's suggestions, Meyer noted. As powerful as an AI-powered logistics system may be, it cannot "calculate the influence of a human factor and other unpredicted events on the forecasting and planning," he said.

An explainable system allows a planner to look at the data the system uses to make its decisions, as well as how it uses that data. 

New approaches to achieving AI transparency

Despite the importance of explainable AI systems, and despite increased interest in such systems over the past year, it's still early days.

"Most firms are still coming to terms with implementing explainability within their models, given how early the stage of development of tools are in this area," said Nick McQuire, chief of enterprise research at analyst firm CCS Insight.

Still, CCS Insight's recent surveys have found that tools supporting responsible and explainable AI are among the top requirements for technology investment, McQuire said.

It makes sense: The number of firms running production machine learning applications has risen sharply over the past year, McQuire said.

Companies are seeing that more regulation around AI, especially in the European Union, is imminent and will likely require certain standards for AI transparency. They are also beginning to see how an unexplainable AI model that behaves badly, such as one that produces biased outcomes, can cause PR nightmares and potential legal problems, McQuire said.

How to get started with explainable AI

Building explainability and transparency in AI into the work environment starts with education and communication.

It's important to make sure the AI developers and the business leaders maintain a constant flow of communication, said Bryce Murray, director of technology and data science at Aperio Consulting Group.

When creating and implementing an algorithm, "we need to understand what the human needs to hear to make the best decision," he said. To do that, an algorithm's output needs to be communicated in language a business leader can understand.

That's not as easy as it might seem, however.

The business and technology sides often speak two different languages, said Kerry Goyette, founder and president of Aperio Consulting Group. Business leaders talk in terms of business outcomes, while technologists speak in more technical terms.

"They often don't understand each other," Goyette said.

The two sides need to communicate with each other and mentor each other, she said. The technology side needs to understand what the business leaders are trying to measure, while the business leaders should better understand how the technology works.

That communication is key to creating AI that business users can understand.

"AI does a great job of taking data and turning it into information, but we have to remember: The human has to get the insights, and they have to have the wisdom to make the judgements," Goyette said.

Tools and products for transparent AI

"It's a bit of a misnomer that AI is challenging to explain," said Forrester Research analyst Mike Gualtieri. "Actually, AI is only challenging to explain if businesses aren't using XAI techniques."

He pointed to Shapley values, the most popular technique for explaining a model's output.

Derived from game theory, Shapley values determine how much each player contributes to a game by taking its total outcome and breaking that outcome down player by player. In machine learning, developers can use Shapley values to measure how much each feature contributes to a model's prediction.


SHAP (SHapley Additive exPlanations) is an open source tool that uses Shapley values to explain model predictions and is built into many commercial explainable AI products.

By understanding the weight each feature has on the outcome of a model, researchers can better understand why the model made a prediction. Researchers can see if a model's prediction relies too much or too little on a single piece of data, for example, as either scenario would throw off the model's accuracy.
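
As a rough illustration of that idea, the sketch below trains a toy classifier and uses the open source SHAP library to rank features by their average Shapley attribution. The data set, feature names and model choice are invented for the example, not drawn from any product mentioned in this article.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
import shap  # open source SHAP library

# Toy stand-in for a tabular business data set; feature names are illustrative.
rng = np.random.default_rng(0)
feature_names = ["income", "credit_score", "loan_amount", "zip_code"]
X = rng.random((500, 4))
y = (X[:, 1] > 0.5).astype(int)  # toy target driven mostly by one feature

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature, per row

# Mean absolute Shapley value per feature = its overall influence on predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In this toy setup, credit_score should dominate the ranking; in a real review, a data scientist would look for features whose influence is surprisingly large or surprisingly small.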

Let's say a bank uses a machine learning model to determine whether to approve or deny someone a loan. The finance industry is heavily regulated, so the bank would want to make sure its model makes fair, legal decisions.

Using SHAP, the bank could determine whether the model relies on a disallowed variable to decide if an individual's loan should be approved, Gualtieri said.

If SHAP flags such a problem, "the software using the model can simply bypass using the machine learning model to make the loan decision and reroute to a different rules-based process or a human reviewer," he said.
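
A minimal sketch of that kind of guardrail might look like the function below, which checks one applicant's Shapley attributions against a list of disallowed features before accepting the model's decision. The feature names, attribution values and threshold are all hypothetical.

```python
from typing import Sequence, Set

def route_application(
    feature_names: Sequence[str],
    shap_row: Sequence[float],
    disallowed: Set[str],
    threshold: float = 0.10,
) -> str:
    """Decide whether to trust the model's loan decision for one applicant.

    Returns "model" if no disallowed feature meaningfully influenced the
    prediction, or "reroute" to send the case to a rules-based process or
    a human reviewer instead.
    """
    for name, contribution in zip(feature_names, shap_row):
        if name in disallowed and abs(contribution) > threshold:
            return "reroute"
    return "model"

# Made-up attributions for a single applicant:
print(route_application(
    feature_names=["income", "credit_score", "loan_amount", "zip_code"],
    shap_row=[0.21, 0.35, -0.08, 0.18],
    disallowed={"zip_code"},  # assumed here to proxy for a protected attribute
))  # prints "reroute" because zip_code's attribution exceeds the cutoff
```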

As the need for transparency in AI grows, so does the number of products aiming to help make models more transparent.

A flood of open source tools, such as Google's What-If Tool and IBM's AI Explainability 360, has come out over the last few years. The big tech companies have also worked these capabilities into their larger AI platforms.

Over the past year alone, Microsoft launched new interpretability tools in Azure Machine Learning; AWS released Amazon SageMaker Clarify, a tool for mitigating AI bias; and Google released Vertex Explainable AI for its Vertex AI MLOps platform.

These tools, like SHAP, can help businesses understand the weight that individual features have on model outcomes, test for various machine learning fairness metrics, and help data scientists better visualize their model outcomes.

"Although each has a different set of capabilities around some of the core areas of explainability -- such as black box explanations, AI inference, glass box models, feature importance and visualisations, for example -- they are good starting points for businesses," McQuire said.

Still, he noted, it's too early to tell which products customers will gravitate toward or which will dominate in the future. Either way, organizations should keep an eye on explainable AI tools and technologies.

 
