Solving the AI black box problem through transparency

Ethical concerns around black box AI complicate user trust in algorithmic decision-making. As AI looks to the future, experts urge developers to take a glass box approach.

If "black box" sounds insidious, that's because it can be. Experts are constantly warning against the complications of nontransparent AI problems, which can include bias and ethical concerns.

But even with those concerns, black box development is currently the primary method of deep learning modeling. In this approach, millions of data points are fed into the algorithm, which then identifies correlations between specific data features to produce an output. The process inside the box, however, is mostly self-directed and is generally difficult for data scientists, programmers and users to interpret.
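
To make the distinction concrete, below is a minimal, hypothetical sketch in Python (using scikit-learn on synthetic data) of a black box model: the network produces predictions, but its fitted weights are just arrays of numbers that offer nothing a person can read as the reason for any individual decision.

# Minimal sketch of a "black box" model: a small neural network trained on
# synthetic data. The fitted weights are arrays of numbers; nothing in them
# directly explains why a given input was classified one way or the other.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

print(model.predict(X[:5]))             # outputs, with no explanation attached
print([w.shape for w in model.coefs_])  # internal weights: opaque to a human reader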

From facial recognition software to credit card services, black box AI is built into many of the deep learning applications we interact with on a daily basis. Increased scrutiny of the black box model's prevalence has led to a push to research and develop explainable AI. The field of explainable AI centers on understanding how AI models arrive at their outputs, with the goal of creating AI that has sound, traceable explanations for each of its decisions. But the complex associations the AI draws among data points make this very difficult.
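
One common explainability technique, shown in the hypothetical sketch below, is permutation importance: shuffle each input feature and measure how much the model's score drops, which gives a rough, model-agnostic view of which features a black box actually relies on. This is one approach among many, not a complete explanation of any individual decision.

# Sketch of a model-agnostic explainability technique: permutation importance
# from scikit-learn, run here against a small neural network on synthetic data.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")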

How to clear the box

The solution to these issues around black box AI is not as easy as cleaning training data sets. Currently, most AI tools are underpinned by neural networks, which are hard to decipher. Trust in the company and its training process is a starting point, but experts have said the real solution to the AI black box problem is shifting to a training approach called glass box or white box AI.

Glass box modeling requires reliable training data that analysts can explain, change and examine in order to build user trust in the ethical decision-making process. When white box AI applications make decisions that affect people, the guarantee is that the algorithm itself can be explained and has gone through rigorous testing to ensure accuracy.
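
As an illustration of the difference, here is a minimal sketch of a glass box alternative: a logistic regression whose learned weights can be read, examined and challenged feature by feature. The feature names and data are hypothetical, purely for illustration.

# Sketch of a "glass box" model: each learned weight maps to a named feature,
# so analysts can explain, change and examine the decision logic directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]  # hypothetical
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: weight {coef:+.2f}")  # readable, per-feature contribution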

"AI needs to be traceable and explainable. It should be reliable, unbiased and robust to be able to handle any errors that happen across the AI solution's lifecycle," said Sankar Narayanan, chief practice officer at Fractal Analytics Inc., an artificial intelligence company. Explainability is the foundation of ethical AI and by understanding the decision-making process, we can increasingly use artificial intelligence to solve problems and improve society, without the consequences of bias. Even if the AI gets it wrong, we'll know why and can adjust the training data and internal source code.

Current modeling is complicated by a fundamental issue: AI is meant to mimic the way humans process information, but behavioral economics research shows that the thought process of humans is often irrational and unexplainable, Narayanan said.

"We rely on our thought process even if we are not able to rationally explain it, i.e., a black box," he continued.

Despite this reality, choosing the best available input data and monitoring what happens inside the box goes a long way to promoting ethical AI and mitigating concerns about the lack of transparency.

A human touch

One key to successful glass box AI is increased human interaction with the algorithm. Jana Eggers, CEO of Boston-based AI company Nara Logics, said that strictly black box AI reflects both human bias and data bias, which affect the development and implementation of AI. Explainability and transparency begin with the context developers provide to both the algorithm and its users: broad familiarity with the training data and strict parameters for what the algorithm calculates and is capable of.

Another step toward dismantling some of the AI black box problems is to analyze the content fed into the algorithm and the output it produces in order to better understand the decision-making process. Once the process is clear to developers, they can then adjust it to reflect human ethics.

"There are plenty of times when the AI is wrong and the humans are right. You have to prepare for that," Eggers said.

Clarifying black box AI means creating a methodology that is explainable at human scale, using simple, understandable data science to outline the program's decision-making process, including which factors were weighed and how heavily.

"When we think about explanations, we need to think about what is appropriate for a human cognitive scale," said Brian D'Alessandro, director of data science at SparkBeyond, and AI-powered software company. "Most people can fully consume a rule that has five or six different factors in it. Once you get beyond that, the complexity starts to get overwhelming for people."

[Chart: The two popular testing models differ in explainability]

The future of AI and ethics

Recently there's been a lot of discussion about AI bias, ethical concerns and accountability. Vendors, engineers and users can do their part, but these types of problems are hard to spot and stamp out in a black box AI application.

Black box AI complicates programmers' ability to filter out inappropriate content and measure bias, because developers don't know which parts of the input are weighed and analyzed to create the output.

Sometimes the data is collected in a way that makes it biased, and black box functionality creates the risk that these problems could get replicated and magnified, D'Alessandro said.
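
A basic check developers can run, sketched below with made-up numbers, is to compare the model's positive-outcome rates across groups represented in the data; large gaps are a signal that bias in the data may be getting replicated by the model.

# Minimal sketch of a group-level bias check on a model's outputs.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                # model decisions (hypothetical)
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])  # group membership

for g in np.unique(group):
    rate = predictions[group == g].mean()
    print(f"group {g}: positive rate {rate:.2f}")  # large gaps warrant investigation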

In one example, Amazon created an AI recruiting tool that analyzed 10 years of applications in order to create a system that automatically identified characteristics of high-performing employees and scored new candidates against those standards. The tool made headlines in 2018 when it was revealed that, due to societal influences such as wage gaps and gender bias in technology jobs, the algorithm favored male applicants.

Now, companies using AI are left searching for ethical guidelines for AI data collection and deployment. In April 2019, the European Union released guidelines defining ethical AI use that experts are hailing as a big step toward tackling the black box AI problem.

"The [EU's] Ethics Guidelines for Trustworthy AI is path breaking, not just for Europe, but for the world at large," Narayanan said. "The guidelines will nudge businesses to start to trust the specialists and veer away from generalist services providers that have rebadged themselves as AI companies."
