Explainable AI systems build trust, mitigate regulatory risk

Explainable AI will look different depending on whether a business is using a predictive, generative or agentic AI system.

Building a trusted artificial intelligence system starts with explaining why the AI system reaches certain decisions. Explainable AI goes a long way toward addressing not only trust issues within a business, but regulatory concerns as well.

According to research from analyst firm Forrester Research, many business leaders have concerns about AI usage, particularly generative AI, which has surged in popularity since OpenAI released ChatGPT in 2022.

AI has a trust problem, Forrester analyst Brandon Purcell said, which is why the technology needs explainability to foster accountability. Explainability is a set of techniques businesses can use to ensure stakeholders understand how AI systems arrive at their outputs.

"Explainability builds trust," Purcell said at the recent Forrester Technology and Innovation Summit in Austin, Texas. "And when people, especially employees, trust AI systems, they're far more likely to use them."

Implementing an explainable AI system will not only help foster use and trust within a business, but also mitigate regulatory risk, Purcell said.

Explainability is a key component of regulatory compliance, particularly with laws like the EU AI Act. Forrester analyst Alla Valente said it's crucial that businesses focus on measures like explainable AI to meet new AI laws and requirements, and not fall short of existing data privacy regulations.

"Make sure that your AI efforts have accountability, responsibility, trust and security," Valente said during the Forrester summit. "Don't look for regulators to set those standards, because that is your absolute minimum."

Purcell said explainable AI will look different depending on the AI model a business is using: predictive, generative or agentic.

Preparing an explainable AI system

There are different types of explainability, including reproducibility, observability, transparency, interpretability and traceability, Purcell said.

For predictive AI models, transparency and interpretability are the best types of explainability to pursue, he said. Transparency means using "glass-box modeling techniques," which let users see into the modeling process: what the model found in the data and how it arrived at its prediction. Transparency is likely the approach regulators will want to see, especially for high-risk use cases, Purcell said.
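
To make this concrete, here is a minimal sketch of a glass-box model: a shallow decision tree whose full decision logic can be printed and audited end to end. The synthetic data, feature names and use of scikit-learn are illustrative assumptions, not a specific technique Purcell named.

    # A shallow decision tree as a "glass-box" model: every split it makes
    # can be rendered as text and reviewed. Data and feature names are
    # hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["income", "debt_ratio", "tenure", "late_payments"]

    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # export_text prints the full tree, so a reviewer or regulator can
    # trace exactly how any individual prediction was reached.
    print(export_text(model, feature_names=feature_names))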

Interpretability could be used for lower-risk use cases, such as fraud detection or explaining to a customer why they didn't receive a loan. Partial dependence plots, which demonstrate the impact of specific inputs on the outcome of a predictive AI model, can provide interpretability.
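
As a sketch of how a partial dependence plot is generated in practice, the following assumes scikit-learn and synthetic data; PartialDependenceDisplay plots the average effect of a single input on the model's predicted outcome.

    # Partial dependence: how the prediction shifts, on average, as one
    # input varies. Model choice and data are illustrative assumptions.
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import PartialDependenceDisplay

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Plot the marginal effect of the first two features on the prediction.
    PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
    plt.show()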

"In predictive AI, explainability is really about the model itself," Purcell said. "It's the one place where you can try to open the hood on the model and see how it's working."

Generative AI models are "inherently opaque," making explainability much more challenging, he said. However, an approach businesses can take is traceability, or documentation of the entire AI system.
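
One way that documentation could be captured in code is sketched below, assuming every model call passes through a single logging helper. The log_call function and its fields are hypothetical, not any vendor's API.

    # Traceability sketch: record every generative AI call with enough
    # metadata to reconstruct it later. All names here are illustrative.
    import json
    import time

    def log_call(model: str, prompt: str, output: str,
                 path: str = "ai_audit.log") -> None:
        record = {
            "timestamp": time.time(),  # when the call happened
            "model": model,            # exact model version used
            "prompt": prompt,          # full input, for reconstruction
            "output": output,          # what the system returned
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_call("example-model-v1", "Summarize this contract ...",
             "The contract covers ...")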

For companies partnering with large generative AI vendors such as Google, Anthropic or OpenAI, Purcell said organizations like Stanford University's Institute for Human-Centered AI publish a transparency index comparing the different vendors. The generative AI vendors themselves also provide model cards, which include information about a model's performance.

When evaluating model cards, Purcell said businesses should look for the model's intended uses, known limitations, ethical considerations when using the model, training data provenance, how the model was evaluated and model performance metrics.
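
One way a business might operationalize that checklist is as a structured record to be completed before a model is adopted. This sketch is hypothetical: the field names mirror the criteria above, and the values are placeholders.

    # A model card review captured as a dataclass, so no field can be
    # silently skipped. Values shown are placeholders, not vendor data.
    from dataclasses import dataclass

    @dataclass
    class ModelCardReview:
        model_name: str
        intended_uses: list[str]
        known_limitations: list[str]
        ethical_considerations: list[str]
        training_data_provenance: str
        evaluation_method: str
        performance_metrics: dict[str, float]

    review = ModelCardReview(
        model_name="example-llm",  # placeholder vendor model
        intended_uses=["summarization", "question answering"],
        known_limitations=["may produce inaccurate citations"],
        ethical_considerations=["not for use in legal or medical advice"],
        training_data_provenance="vendor-disclosed web corpus",
        evaluation_method="held-out benchmark suite",
        performance_metrics={"accuracy": 0.87},
    )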

Lastly, for agentic AI systems, which pursue goals autonomously, Purcell said businesses will need to strive for reproducibility: verifying that a system re-creates the same outputs when given the same or similar inputs.
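
A minimal sketch of what such a reproducibility check could look like, assuming the agent's decision step can be invoked directly with a pinned random seed; run_agent is a hypothetical stand-in, not a real agent framework.

    # Reproducibility check: run the same decision twice with the same
    # seed and confirm the outputs match. run_agent is illustrative only;
    # a real system would also pin model version, temperature and tools.
    import random

    def run_agent(task: str, seed: int) -> str:
        rng = random.Random(seed)
        actions = ["search", "summarize", "escalate"]
        return rng.choice(actions)

    first = run_agent("triage support ticket", seed=42)
    second = run_agent("triage support ticket", seed=42)
    assert first == second, "outputs diverged: run is not reproducible"
    print(f"Reproducible action: {first}")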

Agentic AI systems especially will require a significant amount of trust before they're given agency and deployed in the real world, according to Purcell. Similar to self-driving cars, agentic AI systems will need many hours of operation in a simulated environment before actual deployment.

"Agentic systems are going to have to accrue millions of miles before we let them loose in the real world," he said.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
