Interpretable AI has benefits beyond compliance
Making AI models more interpretable has a wide range of benefits and shouldn't just be thought of as a compliance exercise, experts at the H2O World conference said.
Interpretable AI is attracting attention for helping enterprises address regulatory concerns raised by GDPR and other regulations.
But interpretability is not just about compliance. Experts at the H2O World conference in San Francisco said interpretability is also important for improving the adoption of AI models by decision-makers and front-line experts. Interpretability enables these users to identify and work around blind spots in machine learning models.
"If you develop a system that people cannot buy into, that is a huge problem," said Rajesh Iyer, vice president and head of AI Center of Excellence at Capgemini.
When Capgemini engineers build an AI tool to improve decision-making in an area like underwriting or loan approval, one of their primary goals is accuracy. But it is important to be able to explain to users what is going on and let them override a recommendation when it doesn't make sense.
"I believe in creating a close human-in-the-loop AI system," Iyer said. His team starts by building a machine learning model with offline data. Then, they put it into preproduction to give users a chance to understand how it works from a prediction and explanation standpoint. The users also need to have the ability to accept the decisions or roll them back.
Capgemini also likes to put new models through the bias wringer. "There is a lot of stuff that looks great, but there is bias on the other side," Iyer said. If they can't explain a model, they just toss it out.
Iyer said he typically asks Capgemini consulting clients why a variable is in a particular model. This is just good data science that experts like Iyer were doing in creating financial models before deep learning took off. Making sense of large, messy data sets without much supervision or direction is one of the strengths of deep learning, but this approach can separate the data scientist from an understanding of the data.
"This is an exercise you need to go through in consulting if you think you are going to convince clients you know what you are doing," he said.
Limiting the impact of flaws
One common myth in AI is that enterprises must sacrifice accuracy for interpretability. However, interpretability can be baked into modern AI techniques, like deep learning, provided developers and data scientists frame the machine learning models correctly.
Agus Sudjianto, executive vice president and head of corporate model risk at Wells Fargo, said that seeking to wring every drop of accuracy out of models can sacrifice interpretability. A much better goal is to implement the AI algorithms in a way that makes it easy to see what can go wrong with the model and how to deal with those problems. Understanding the weaknesses of a model and managing them are crucial to Wells Fargo's business. "We can build very sophisticated models that can also be interpretable," he said.
There are several issues beyond regulation that drive interpretable AI for Wells Fargo. "For us, the danger is not only regulation," Sudjianto said. "We also need to think about making decisions that impact our customers."
It's also important to think about the relevance of the model. For example, a model created with data from the autumn may work well but be less accurate in other seasons of the year. Human bankers may possess the experience to understand these trends. They can look at a model and see whether the effect of a variable is important, and if there is a weakness, they know how to work around it.
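A simple way to check that kind of relevance is to score the same model on seasonal slices of held-out data and compare the results. The sketch below assumes a pandas DataFrame with a `season` column and an already-fitted `model`; both are hypothetical.

```python
# Sketch of a relevance check: score the same model on seasonal slices of
# held-out data to see whether accuracy degrades outside the training season.
# The DataFrame, its column names and the fitted `model` are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score


def auc_by_season(model, df: pd.DataFrame, feature_cols, label_col="defaulted"):
    """Return AUC for each season present in the evaluation data."""
    scores = {}
    for season, chunk in df.groupby("season"):
        probs = model.predict_proba(chunk[feature_cols])[:, 1]
        scores[season] = roc_auc_score(chunk[label_col], probs)
    return scores

# A model trained on autumn data whose AUC drops sharply on spring or summer
# rows is a candidate for retraining or for a documented human workaround.
```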
Many times, Sudjianto will run a model and find that it works well on sample data but fails in production. For example, he might spot a previously overlooked correlation that warrants further exploration, only to discover on closer analysis that it's just a problem with the data. These types of data problems only become apparent when models are interpretable and he can see where problems arise.
"We are dealing with a large data set, and sometimes, we don't know if it is clean," he said. As a result, interpretable AI is not just about building better models, but also identifying issues with the data sets that are being accumulated.
Explainability important for good business
If a consumer wants to borrow money, fair lending laws require that a creditor either grant the consumer the money or explain in easily understandable terms the reason it chose not to. The purpose is fairness, but it also gives the person an opportunity to dispute the decision.
When a creditor uses an AI model to make an approval decision, the result is often based on the probability of an outcome. If that probability falls below a certain threshold, the loan is declined. Then, the creditor needs to look at why it was declined. Explainability requires looking at the factors that drove the decision.
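Stripped to its essentials, that decision step can be as simple as the sketch below, where `model` is a hypothetical fitted classifier and the 0.6 cutoff is purely illustrative, not any lender's actual policy.

```python
# Minimal sketch of the approval decision described above: the model returns a
# probability, and the loan is declined when it falls below a cutoff.
APPROVAL_THRESHOLD = 0.6  # illustrative value only


def decide(model, applicant_features) -> dict:
    prob_repay = model.predict_proba([applicant_features])[0, 1]
    approved = prob_repay >= APPROVAL_THRESHOLD
    return {"approved": approved,
            "probability": prob_repay,
            # Declines still need a reviewable record of the driving factors.
            "needs_explanation": not approved}
```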
Marc Stein, founder and CEO at Underwrite.ai, which creates models for financial companies, said he believes this kind of transparency is important, even when not required by regulators. Underwrite.ai is helping to make loans in the Philippines and Mexico and makes a point of telling consumers why a loan was rejected, even though regulations there are more lax than in the EU or U.S. "You don't have to do it for regulators or under threat of penalty," Stein said.
Different tiers of explainability
Interpretable AI and explainability are related ideas but not entirely synonymous. Interpretability relates to the ability to predict what will happen as variables change. Explainability describes the ability to communicate the underlying mechanics of a model in the language of the user.
There are different levels of explainability to consider with AI, and the degree of explainability needed changes depending on what type of user an explanation is intended for.
Consumers want to know what aspects of their financial behavior influenced a loan decision. One approach for explaining the underlying decision to a consumer might be to focus on the key correlation in a model that had the biggest impact on a decision. For example, a late payment might be the biggest factor in a rejected loan application.
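For a simple linear scoring model, that "biggest factor" can be read straight off the per-feature contributions, as in the sketch below. The weights and applicant values are made up for illustration.

```python
# Sketch of the "biggest factor" style of consumer explanation for a simple
# linear scoring model: rank each feature's contribution (weight * value)
# and report the one that pushed the decision down the most. Weights are made up.
weights = {"late_payments": -1.8, "credit_utilization": -0.9, "income": 0.4}
applicant = {"late_payments": 2, "credit_utilization": 0.75, "income": 1.1}

contributions = {f: weights[f] * applicant[f] for f in weights}
top_reason = min(contributions, key=contributions.get)  # most negative factor
print(f"Main reason for decline: {top_reason}")          # late_payments
```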
Another level of explainability lies in convincing boards of companies to adopt a new model. Stein has found that it's useful to start with something executives already agree with. For example, in the credit industry, executives are used to correlating a FICO credit score with loan risk. This score produces good predictions until a person's credit score drops below 720. Underwrite.ai uses additional variables to improve the accuracy of predicting loan risk for this class of customers. Showing the performance of these more complex models against traditional models makes it easier to explain to financial executives how a better model could impact their business.
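That kind of comparison can be sketched as below: benchmark a score-only baseline against a model with additional variables on the segment of applicants below 720. The data here is synthetic and the results are not Underwrite.ai's; real lift depends entirely on the portfolio.

```python
# Sketch of the comparison described above: benchmark a score-only baseline
# against a model with extra variables, restricted to applicants below 720.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
fico = rng.integers(500, 850, size=n).astype(float)
extra = rng.normal(size=(n, 3))                     # e.g., cash-flow features
logit = 0.01 * (fico - 700) + extra[:, 0] - 0.5 * extra[:, 1]
default = (rng.random(n) < 1 / (1 + np.exp(logit))).astype(int)

X_base = fico.reshape(-1, 1)
X_rich = np.column_stack([X_base, extra])
idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
subprime = idx_te[fico[idx_te] < 720]               # the segment in question

for name, X in [("FICO only", X_base), ("FICO + extra", X_rich)]:
    model = LogisticRegression(max_iter=1000).fit(X[idx_tr], default[idx_tr])
    auc = roc_auc_score(default[subprime], model.predict_proba(X[subprime])[:, 1])
    print(f"{name}: AUC below 720 = {auc:.3f}")
```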
At the higher end, data scientists need to consider more nuanced factors in determining the value of a new model. "If you cannot defend it in front of Ph.D.s, that model will not get approved," Wells Fargo's Sudjianto said.