
The long-term answer to fixing bias in AI systems

The technology is exploding with new developments daily. However, problems with training data can lead to bias. Fixing it requires rethinking how the systems are trained and educating users.

A new AI system or tool pops up every day.

AI systems are more popular than ever -- and smarter.

From large language models such as GPT-3 to text-to-image models like Dall-E and, most recently, text-to-video systems like Imagen Video -- a system Google introduced on Oct. 5 that takes a text description and generates video -- AI systems have also become more sophisticated.

However, sophistication comes at a cost, according to Chirag Shah, associate professor in the Information School at the University of Washington.

While the systems' creators have tried to make the systems smart, they haven't put the same effort into making them fair and equitable, Shah said. Problems with how the systems learn and the data they learn from often cause bias to creep in.

In this Q&A, Shah discusses approaches to fixing bias in AI technology.

What are some of the ways to fix the bias problems in AI systems?

Chirag Shah: Are you looking for a quick fix? Those kinds of things can be done relatively quickly. But that doesn't really solve the underlying problem.

For instance, if your search results are biased, you could actually detect that and ... instead of providing the original results, you shuffle them in a way that provides more diversity, more fairness. That kind of addresses the issue. But it doesn't change the fact that the underlying system is still unfair, and it is still biased. It means that you're dependent now on this additional layer of checking and undoing some of the things. If somebody wanted to game the system, they could easily do that, and we've seen this. This is not a long-term solution.
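As a rough illustration of the kind of quick fix Shah describes, a re-ranking layer can reshuffle an already-retrieved result list so that one group of results doesn't dominate the top positions. This is a minimal sketch under assumed inputs -- the group labels, scores and the `max_run` rule are hypothetical, not any vendor's actual method -- and, as Shah notes, it leaves the underlying ranker unchanged.

```python
# Minimal sketch of a post-hoc "diversity re-ranking" layer: it does not fix
# the underlying ranking model, it only reshuffles its output.
# The result items, scores and group labels are hypothetical.

def rerank_for_diversity(results, max_run=2):
    """Reorder results so no more than `max_run` consecutive items come from
    the same group, preferring higher-scored items where possible."""
    remaining = sorted(results, key=lambda r: r["score"], reverse=True)
    reranked = []
    while remaining:
        last_groups = [r["group"] for r in reranked[-max_run:]]
        # Pick the best-scored item that would not extend a run of one group.
        pick = next(
            (r for r in remaining
             if not (len(last_groups) == max_run
                     and all(g == r["group"] for g in last_groups))),
            remaining[0],  # fall back if every candidate would extend the run
        )
        remaining.remove(pick)
        reranked.append(pick)
    return reranked

results = [
    {"id": 1, "score": 0.95, "group": "A"},
    {"id": 2, "score": 0.93, "group": "A"},
    {"id": 3, "score": 0.90, "group": "A"},
    {"id": 4, "score": 0.70, "group": "B"},
]
print([r["id"] for r in rerank_for_diversity(results)])  # [1, 2, 4, 3]
```

The lower-scored item from group B is promoted above the third item from group A, which is exactly the kind of surface-level correction Shah warns can be gamed because the biased model underneath is untouched.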


Some of these [long-term fix] recommendations are hard. For instance, one way these systems get biased is they're obviously being run by for-profit organizations. The usual players are Google, Facebook and Amazon. They are banking on their algorithms optimizing for user engagement, which on the surface seems like a good idea. The problem is, people don't engage with things just because they are good or relevant. More often, they engage with things because the content carries certain kinds of emotions, like fear or hatred, or certain kinds of conspiracy theories.

Unfortunately, this focus on engagement is problematic. It's primarily because an average user engages with things that are often not verified, but are entertaining. The algorithms essentially end up learning that, OK, that's a good thing to do. This creates a vicious cycle.

A longer-term solution is to start breaking the cycle. That needs to happen from both sides. It needs to happen from these services, the tech companies that are optimizing for higher engagement. They need to start changing their formula for how they consider engagement or how they optimize their algorithms for something other than engagement.


We also need to do things from the user side because these tech companies are going to argue that, 'Hey, we're only giving people what they want. It's not our fault that people want to click a lot more on conspiracy theories. We're simply surfacing those things.' We need to start doing more from the user side, which is user education.

These are not quick fixes. This is, essentially, talking about changing user behavior -- human behavior. That's not going to happen overnight.

How willing are vendors to take the long route in fixing the problems with bias in AI systems?

Shah: They don't have a clear incentive to change their formula for engagement, or to have their algorithms optimize not on engagement but on authoritativeness, authenticity, or the quality of information and sources. The only way -- or the main way -- they will be compelled to do that is through regulation.
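To make that trade-off concrete, a ranking score built purely on predicted engagement could in principle be blended with quality signals like the ones Shah names. The sketch below is purely illustrative: the signal names, weights and values are assumptions, not any platform's real formula.

```python
# Hypothetical sketch of re-weighting a ranking score away from pure
# engagement toward quality signals such as authoritativeness and
# authenticity. Signal names and weights are illustrative assumptions.

def ranking_score(item, engagement_weight=0.3, quality_weight=0.7):
    """Blend predicted engagement with source-quality signals."""
    engagement = item["predicted_click_rate"]  # what engagement-only ranking optimizes
    quality = 0.5 * item["source_authoritativeness"] + 0.5 * item["fact_check_score"]
    return engagement_weight * engagement + quality_weight * quality

item = {
    "predicted_click_rate": 0.9,       # sensational content often scores high here
    "source_authoritativeness": 0.2,
    "fact_check_score": 0.1,
}
print(round(ranking_score(item), 3))   # 0.375: high engagement alone no longer wins
```

The design question Shah raises is precisely who sets those weights; absent regulation, the incentive is to keep the engagement weight at or near 1.0.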

By regulation, I do mean things coming from different government agencies that have the authority to actually impose fines if the businesses don't comply. There have to be some teeth in these policies and regulations.

There are actually AI-related regulations that the European Union came up with last year. And then the FTC [Federal Trade Commission] here followed, but the policy on our side is not as strong.

I think we need regulations that recognize that anytime an algorithm mediates information being presented to the user, it is equivalent to that mediator actually producing [the information] because they dictate who sees what in which order, and that has a significant impact. So we're nowhere close to that.

Without the proper incentives, will bias in AI systems get worse as more are created?

Shah: It depends. The question is, [are the systems] what we want? This is where some of my colleagues and I would argue that, at least in some of these cases, we have gone overboard -- we have crossed the line already. We have gotten too excited about what technology could do. We're not asking enough for what technology should do.

There are plenty of these cases where you question, who's asking for this? There are more important problems in the world to solve. Why are we not directing our resources to those things? So yes, I think that's the bigger question here.

Editor's note: This interview has been edited for clarity and conciseness.
