
Eliminating bias in AI is no easy feat, but fixes do exist

As more companies turn to AI to reduce workplace and recruiting bias, it is important to be aware that the data AI tools draw from may have bias baked in.

HR technology buyers say the idea of artificial intelligence has evolved from cutting-edge to "table stakes." And few HR leaders, industry analysts or product developers would downplay the impact that AI, machine learning and similar technologies have had on workforce planning, performance management and talent acquisition.

Against that backdrop, it's no surprise that many vendors have developed products that use AI to mitigate recruiting and workplace bias. Their tools compile and analyze data so that, in theory, employers will make decisions free of unconscious bias.

But do identifying trends and modeling future actions by themselves constitute solutions? No. Executives, managers and HR professionals still need to act on the system's conclusions to change the workforce. And even before they face that challenge, organizations face another: What AI produces is only as good as the data it's based on, data scientists say. Because of that, there is the potential for bias in AI.

'Point of view' in the machine

Many users regard AI as something like a "black box" that accepts data input and transforms it into intelligence, said John Harney, co-founder and CTO of DataScava, a New York provider of unstructured data mining solutions. However, most don't understand that bias in AI happens because the data AI uses has biases built into it. Compiling metrics and using them to identify, say, the traits of successful employees isn't a purely objective process.

The reason is that "bias is another word for point of view," said John Sumser, principal analyst at the consulting firm HRExaminer in San Francisco. "It's not possible to measure without having a point of view. It's not possible to observe without having a point of view. And, so, there is no such thing as data that's free from a point of view."

"The data that's available to make decisions is the same data for people as it is for machines," added Alec Levenson, senior research scientist at the University of Southern California's Center for Effective Organizations in Los Angeles. "Why would you think a machine is going to be less biased than a person?"

Bias, Levenson explained, is about using a small amount of data to make judgments about a situation or a person that are inaccurate. "The fundamental problem that we have is that we [use] shorthand to be able to make quick decisions," he said. For example, people know that women get pregnant and men don't, and from that decide it's more difficult for mothers to work than it is for fathers. The fact that women are impacted physically while men are not, he said, is "a simple biological statement."

That statement, however, became problematic when society created a number of norms around it. Not so long ago, for instance, businesses assumed women shouldn't do certain things while they were expecting. Today, most agree it shouldn't be assumed that women will have a more difficult time working while they're pregnant. However, Levenson said, "there's these other things that go along with it." For example, some people believe that women are more likely to take time off after having children, and data supports that.

But just because data shows women are more likely to take time off doesn't mean that a particular individual will, Levenson said. And that's where bias comes into play. "It's a factual statement to say that women on average are more likely to not work after having kids for a certain amount of time. But that's also a bias statement when you then apply that to the person sitting in front of you." Nothing about machines makes them any less likely to reach a biased conclusion if their source data contains bias in the first place.

That's the kind of trap that makes analysts, data scientists and even technology experts emphasize the need for humans to be involved in both interpreting data and making decisions based on it. "Data has so many inherent biases," said Madhu Modugu, founder and CEO at Leoforce, a recruiting platform provider in Raleigh, N.C. People develop biases based on their perceptions over time, he explained. They make decisions based on those perceptions, which are, in turn, reflected in data. When machines learn based on that data, there's a good chance they'll have those biases as well, he said.
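To make Modugu's point concrete, here is a minimal, hypothetical sketch in Python: a toy model trained on historical hiring decisions that penalized one group learns that same penalty. The feature names and numbers are invented for illustration, and the sketch assumes NumPy and scikit-learn are available.

```python
# Hypothetical sketch: a model trained on biased decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two candidate attributes: a skill score (relevant) and group membership
# (0 or 1), which should be irrelevant to hiring.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical "hired" labels made by biased reviewers: skill matters,
# but candidates in group 1 were systematically penalized.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns a negative coefficient on group membership: it encodes
# the historical bias instead of removing it.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```

In this toy setup, the learned coefficient on group membership comes out negative, mirroring the penalty baked into the training labels rather than correcting for it.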

Artificial intelligence, human perspective

How, then, can organizations address bias in AI? The short answer: Pay attention to the basics.

To start, there's data governance, Sumser said. Whenever HR departments launch a new analytics or intelligence program, they inevitably discover that naming conventions, field sizes, workflow designations and the like "are in disarray." Properly organizing and formatting data isn't a simple process, he warned, but it can't be done without developing an approach to data governance. "If you're calling apples 'oranges' and oranges 'pears' and all three of the things 'plums' in some cases, you can never find the patterns," he said. "So, the data cleaning and governance step is always the first real step."
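As a rough illustration of that cleaning step, the hypothetical pandas snippet below maps inconsistent job-title spellings to canonical names before any pattern-finding happens; the titles and the mapping are invented for the example, not drawn from the article.

```python
# Hypothetical sketch: agree on canonical names, then map every variant to one.
import pandas as pd

records = pd.DataFrame({
    "job_title": [
        "Sr. Software Eng",
        "Senior Software Engineer",
        "SR SOFTWARE ENGINEER",
        "Software Engineer II",
    ],
})

canonical = {
    "sr software eng": "Senior Software Engineer",
    "senior software engineer": "Senior Software Engineer",
    "sr software engineer": "Senior Software Engineer",
    "software engineer ii": "Software Engineer II",
}

# Normalize case and punctuation, then apply the agreed-upon mapping.
records["job_title_clean"] = (
    records["job_title"]
    .str.lower()
    .str.replace(".", "", regex=False)
    .map(canonical)
)

print(records)
```

Only after variants of the same thing resolve to one name can apples be compared with apples rather than with "oranges" and "plums."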

Context matters, too. For example, some tools do a good job of assessing candidates without bias, but the candidate data they work with is biased "on the way in," Sumser said. When the system has finished its work, its output is returned to the same process from which it originated. "It's like you have a polluted stream," he said. "You siphon water off to the side and you clean that water until it's really pristine. Then at the end of that process, you dump it back into the polluted stream."

Then there's the human factor. "Everything we're talking about here is trying to assess human behavior and make predictions about human behavior," Levenson said. "And the fundamental problem we have is that it's a very imprecise process." Organizations run into trouble when they act as if AI and data science tools can produce results as precise and predictive as measurements in physical science. "That's where we end up making really bad decisions," he said. "We can make statements about the likelihood that something's going to happen. We can often be more right than we're wrong, so it's still a worthwhile exercise to do the analysis and make predictions. But you've got to be really careful about the actual actions you take on the basis of those analyses and predictions."
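A tiny, hypothetical calculation shows why: a prediction that holds for most of a group is still wrong for many individuals, so acting on it as a certainty guarantees individual-level errors. The numbers below are illustrative only.

```python
# Hypothetical numbers: applying a group-level rate to every individual.
group_rate = 0.60   # share of the group for whom the predicted behavior holds
population = 1000

correct = int(population * group_rate)
wrong = population - correct

print(f"Predicting the majority behavior for all {population} people: "
      f"{correct} right, {wrong} wrong.")
```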
