
Working toward explainable AI

Many constituencies are pushing for explainable AI algorithms as AI technologies become more widespread, more powerful and more relied upon than ever before.

WASHINGTON, D.C. -- Consumers, policymakers and businesses are pushing to make AI algorithms more explainable.

Many algorithms operate inside a black box: a closed-off system that cloaks the algorithms' inner workings from users and, sometimes, from their own creators. As AI spreads to fields that require strict regulations and procedures, including healthcare and law, the lack of explainable AI has become a problem.

Regulators can't ensure that a system designed to treat a medical issue works fairly, safely and accurately if they can't examine the data it draws on and how that data is processed.

What explainable AI is

Generally, explainable AI refers to an AI system that humans can understand, at least at a high level. It's a simple concept, but in practice, it has proved difficult to achieve.

Complex systems -- and AI systems are continually getting more complex -- typically are trained on a massive corpus of data, far more than a single person, or even a group of people, can sort through. That data can contain errors or biases that people don't detect but that the AI system picks up, learning the incorrect information as if it were true.

The algorithms enable the system to learn and make increasingly complex decisions as researchers and software engineers develop faster, more powerful systems. These algorithms, with their many interconnected pieces, can become too complex for even their creators to understand.

In deep learning models particularly, this is a major problem, said Raj Minhas, vice president and director of the interaction and analytics laboratory at PARC, during a keynote at the AI World Government conference here.

The systems contain hundreds of millions of parameters, he said, often making them effective, but difficult to understand.

[Deep learning is] great when it works.
Raj Minhas, vice president and director of the interaction and analytics laboratory, PARC

"It's great when it works. But when it doesn't work, it's completely inscrutable," Minhas said.

As more AI vendors pop up in an already highly competitive field, many companies refuse to let users or experts examine the inner workings of their models, claiming the technology is proprietary.

All told, achieving explainable AI is a difficult task.

Who explainable AI is for

While a general definition of explainable AI exists, the term "explainable" is ambiguous, and can mean different things to different people, said Lindsey Sheppard, associate fellow at the International Security Program at the Center for Strategic and International Studies in Washington, D.C.

Speaking in a keynote at the government AI conference, Sheppard noted that "no one size fits all" when it comes to AI.

Raj Minhas, director of PARC's interaction and analytics laboratory, talks explainable AI at AI World Government.

An AI system that a data scientist understands might be too complex for a business user, she said. Organizations need to recognize and address these different levels of explainability.

"What is the appropriate level of trust that has to be met, or the appropriate level of understanding that has to be met across your organization?" Sheppard said.

That's a question business leaders will need to ask themselves when considering explainable AI.

The need for explainability

In his keynote, Minhas referred to an article published earlier this year in MIT Technology Review about how the U.S. justice system is using AI to help determine former inmates' potential for recidivism.

The article describes how AI systems, drawing on numerous data points, help determine whether an ex-convict is at risk of reoffending. A judge then uses that information, along with other factors, to help make decisions such as how severe a sentence should be.

Because the algorithm is not easily explainable, it could make a wrong assessment, and a judge would have no real way of knowing.

These are the kinds of problems the AI community is facing, Minhas said, and as AI is adopted more widely, they might become more common.

It will be difficult to balance explainability with the push to build better, faster AI systems, Minhas said. More conversations about AI ethics could be one way to address the problem.

Governments, including the U.S. government, have already issued AI guidelines that call for more explainable systems. Businesses, too, have formed groups to advocate for stronger ethics and regulation in AI.

The AI World Government conference was held June 24 to 26 at the Ronald Reagan Building and International Trade Center.
