Explainability is no solution to problem of bias in AI
Explainability has been touted as a solution to the problem of biased AI models, but experts say that approach only gets you part of the way to bias-free applications.
Most enterprises now understand that they need to be wary of the problem of bias in AI models, but understanding bias itself is difficult.
"Eliminating one source of bias can create another source of bias," said Anthony Scriffignano, senior vice president and chief data scientist at Dun & Bradstreet. "We need to understand bias, yes, but not try so hard to drive it out that we're now creating new problems."
In a panel discussion at the AI World Conference & Expo in Boston, Scriffignano noted that deep learning models, which underpin most AI applications today, are typically opaque, and interpreting why or how they made a decision can be challenging. That makes it particularly tricky to understand sources of bias and how they manifest in model outputs.
Explainability has limits
Making the outputs of deep learning models more explainable and easier for humans to interpret has gained prominence over the last year or so. The idea is to design models to output a reason code alongside their primary recommendation, laying out the principal factors that influenced the model's decision.
This could be valuable when trying to understand how bias in AI applications enters into model decisions.
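Loan approvals are a common illustration of the reason-code idea. The sketch below is a minimal, hypothetical example (the feature names, weights and threshold are assumptions for illustration, not any panelist's system): it ranks each feature's contribution to a simple linear model's score and reports the strongest contributors as reason codes alongside the decision.

```python
import numpy as np

# Hypothetical feature names and learned weights for a toy linear credit model.
FEATURES = ["income", "debt_ratio", "late_payments", "account_age_years"]
WEIGHTS = np.array([0.8, -1.5, -2.0, 0.4])  # assumed values, for illustration only
BIAS = 0.1

def predict_with_reasons(x, top_k=2):
    """Return an approve/deny decision plus 'reason codes': the features
    whose weighted contributions pushed the score hardest either way."""
    contributions = WEIGHTS * x               # per-feature contribution to the score
    score = contributions.sum() + BIAS
    decision = "approve" if score > 0 else "deny"
    # Rank features by the magnitude of their influence on this prediction.
    order = np.argsort(-np.abs(contributions))[:top_k]
    reasons = [(FEATURES[i], round(float(contributions[i]), 2)) for i in order]
    return decision, reasons

# Example applicant (standardized feature values, made up for illustration).
applicant = np.array([1.2, 0.9, 1.5, 0.3])
print(predict_with_reasons(applicant))
# ('deny', [('late_payments', -3.0), ('debt_ratio', -1.35)])
```

Even in this simple form, the reason codes only describe the model's arithmetic for a single prediction; they say nothing about whether the data feeding the model, or the way the decision is acted on, introduced bias in the first place.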
But deep learning models make decisions or recommendations based on an accumulation of hundreds of processing layers considering thousands or even millions of variables in a data set. The process they use to get to an output is complex and rarely easy to sum up in a human-explainable way.
"If we could fully explain what happens at every layer of a deep neural network, it wouldn't be enough," said Heather Ames Versace, co-founder and COO of Boston-based deep learning software company Neurala.
Speaking at the conference, Ames Versace said understanding how bias infects deep learning requires a more comprehensive view of everything that goes into a model.
It starts with understanding how data is collected and ends with looking at how an organization puts a model's recommendations into action. Explaining how a model arrived at its answer is only a small part of understanding bias in the use of deep learning.
"You have to look at the system as a whole," Ames Versace said.
Respect for users is key
When using data about human subjects, enterprises could draw on well-established guidelines developed by researchers and academics to guide their deep learning applications, said Raj Minhas, vice president and director of the interaction and analytics laboratory at Parc, a Xerox subsidiary that conducts research in machine learning and other areas.
In a panel discussion at the conference, Minhas said using these established research guidelines -- which focus on ideas like informed consent, explaining to subjects how their data will be used and granting subjects some measure of control over their data -- could do more to eliminate bias in AI applications than making models more explainable.
Even if the enterprise doesn't know how or why a model delivered a specific answer, these guidelines can help ensure that the answer is used in an equitable way.
"We don't have to reinvent the wheel just because data science is a part of our daily lives," Minhas said. "There are processes in place for how research is done on humans. We can say, 'How does this process affect people?'"
The conference took place Dec. 3 to 5 at the Seaport World Trade Center.