How to address the hidden risks of algorithmic decision making

From biased customer interactions to harmful treatment recommendations, some risks of automated decisions have yet to be resolved. A Wharton School professor offers a way forward.

More and more of our society's decisions are being turned over to algorithms. In some ways, this represents progress: it makes businesses more data-driven and less susceptible to individual biases. In other ways, algorithmic decision making presents risks. Training data sets can themselves be biased, and the black-box nature of most AI applications makes it hard to know whether a product or service operates ethically or exploitatively.

A new book by Kartik Hosanagar, a professor at the Wharton School at the University of Pennsylvania, explores some of these issues and looks at possible responses. In this interview, we discuss some of the highlights of his book, A Human's Guide to Machine Intelligence, including his Algorithmic Bill of Rights, which would give enterprises a framework to help them guide their AI projects in ethically responsible ways.

What made you want to write this book?

Kartik Hosanagar: Increasingly, we see that algorithms are driving a lot of our decisions in our personal and professional lives. If you look at your personal life, algorithms are recommending what products to buy; on Netflix, Facebook, and YouTube, algorithms are suggesting what media to consume; and they're having a very significant impact on our choices. For example, on Amazon, over a third of our choices are driven by algorithms. On Netflix, over 80% of our viewing activity is driven by algorithmic recommendations. They are also very significant in the workplace, particularly if you look at, let's say, recruiting. Algorithms are used to screen job applications and resumes. In banking, algorithms figure out which mortgage applications to approve. In marketing, of course, algorithms are driving advertising activity. They're even making life-and-death decisions, such as helping doctors figure out which treatment options to pursue.

What level of awareness do you see in the general public about how algorithmic decision making is affecting their lives?

Hosanagar: I feel it's mostly quite poor, both in terms of understanding how algorithms impact our lives and how they work. In terms of how they impact our lives, I see in informal surveys and conversations that people appreciate that algorithms are around us, but they don't fully understand how deeply entrenched they are. And a lot of people believe that algorithms make some recommendations and then they do what they want. But actually, the data suggests that algorithms drive much of our decision making. And when I share some of the numbers, it often surprises people.

The other thing is that I find people also have the wrong mental models of how these algorithms work. We see this, for example, when people hear about problems like gender bias in a recruiting algorithm. Sometimes people ask me, "What kind of an engineer would program this?" or "Who's programming this kind of bias?" And it shows that we don't have the right mental models for how these algorithms work, or an appreciation for how the data drives some of the biases.

And I think the other piece is that if you look at the media, a lot of the conversation, especially lately, is fear mongering about the problems here. I wanted to address the problems head-on: explain that, yes, they are there; explain why they happen; and also show the way forward and steer the conversation more toward solutions.

What responsibility do businesses and corporations that are putting these models out into the world have to make sure that they're running fairly and free of bias, and operating constructively?

Hosanagar: I think businesses have a huge role to play here. If they are not careful, there is a risk of a consumer backlash, and potentially even a regulator backlash. And if businesses take the right steps proactively, then I think we will be in a better place for the businesses themselves. I don't think we would want very heavy regulation in the privacy area in the U.S., and one way to prevent that is to take action proactively around transparency: what data we have about consumers, how we use it, who we have shared it with, and letting consumers access some of that information. There's also the issue of bias. These algorithms are operating in socially consequential areas, like credit approval, recruiting or news curation. Having those algorithms audited, and revealing the results of those audits, is important, I think, so that people can trust these systems.
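To make the auditing idea concrete, one minimal form of bias audit compares an algorithm's outcomes across demographic groups. The Python sketch below is a hypothetical illustration, not anything from Hosanagar's book: the data, group labels and 0.2 threshold are all invented for the example, and real audits draw on much richer fairness metrics.

```python
# Minimal sketch of a demographic-parity audit for a credit-approval model.
# All names and numbers here are illustrative assumptions, not a real system.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (applicant group, model's approve/deny decision).
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
gap = parity_gap(rates)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")

# A common (but debated) rule of thumb flags large gaps for human review.
if gap > 0.2:
    print("Warning: approval rates differ substantially across groups.")
```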

In the book, you talk about the Algorithmic Bill of Rights. Can you take me through that?

Hosanagar: So the Algorithmic Bill of Rights is essentially a set of rights that I say consumers can and should expect. A few pillars of those rights are transparency and control. With regard to transparency, sometimes an algorithm is making decisions behind the scenes without letting us know that the decision was automated. We should, as consumers, expect transparency about what data will be used for those decisions, and explanations of the biggest or most important factors behind the decisions. And when people have concerns about privacy, again, that transparency helps.
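As one way to picture what "explanations of the most important factors" might look like in practice, the sketch below ranks each input's contribution to a simple linear scoring model. It is a hypothetical illustration: the feature names, weights and applicant values are invented, and production systems typically rely on more sophisticated explanation methods.

```python
# Hypothetical linear credit-scoring model; all weights and inputs are
# invented for illustration.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
BIAS = -0.2

def score(applicant):
    """Linear score: bias plus the weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant, top_n=3):
    """Rank features by the absolute size of their contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
print(f"score: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```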

And in terms of control, there's this idea that sometimes the goal of product designers is to automate so much that users can be very passive when they use the technology; they don't have to think much. But I think you should actually provide a way for a human to play a role, and you should keep a human in the loop. So control is another pillar of my Bill of Rights.

A simple example of that: two years back, when the fake news issue was happening on Facebook, people were seeing false news in their news feeds, [and] some people realized that, but there was nothing they could do to alert Facebook. Today, two years later, with just two clicks, consumers can alert Facebook that a piece of content is offensive or fake, and then human moderators at Facebook can take action. That's one example of how keeping a human in the loop, and giving users some control over how decisions are made for them, is helpful.
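The flagging flow Hosanagar describes can be captured in a few lines: user flags feed a queue that human moderators work through, rather than the algorithm acting alone. The sketch below is a hypothetical illustration; the class and method names are invented and bear no relation to Facebook's actual systems.

```python
# Minimal sketch of a human-in-the-loop flagging flow: users flag content,
# and flagged items go to a human review queue instead of being handled
# purely by the ranking algorithm. All names here are hypothetical.
from collections import deque

class ReviewQueue:
    def __init__(self):
        self.pending = deque()

    def flag(self, content_id, reason):
        # Called when a user flags content (e.g., "fake" or "offensive").
        self.pending.append((content_id, reason))

    def next_for_review(self):
        # A human moderator pulls the next flagged item to decide on.
        return self.pending.popleft() if self.pending else None

queue = ReviewQueue()
queue.flag("post-123", "fake")    # the user's "two clicks"
print(queue.next_for_review())    # a human moderator takes it from here
```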

Do you feel this Bill of Rights should be something that is formalized and tech companies all sign on to a specific document pledging to do certain things, or is it just about general principles that you'd like to see people operate under?

Hosanagar: I would like to see it formalized. Ideally, I would like to see the large tech companies adopt something like this and say, "Okay, we're going to follow these approaches, we'll audit them, and we will release audit reports," as a way to win back consumer trust. The most formalized approach is regulation. You could even imagine an algorithmic safety board that actually provides oversight of companies and looks into these kinds of things. That's another way it could be done. One concern is that it's easy for regulation to go overboard, so we've got to be careful of that. But I think a good start is for companies to step up and say, "Okay, here are some actions we're going to take on our own."

In the course of doing the research for this book, did you come away feeling more optimistic or concerned about the future? In what direction do you see AI going?

Hosanagar: I'm actually optimistic. My overall take is that these problems are very much solvable as long as we recognize them and tackle them head-on. Like I was saying, there's a lot of fear mongering, and my message is, "Look, if we all take action, then this is very solvable." I'm also working with companies, trying to encourage them to adopt certain practices, like auditing and informing consumers, and helping them take more ownership here. So my overall take is optimistic. But there's no guarantee that we'll solve it.
