The Dr. Jekyll and Mr. Hyde of AI and DEI

AI tools can perpetuate bias and discrimination, but they also have the potential to support fairness and inclusivity. Which side will prevail?

Despite headlines that suggest otherwise, many business and IT leaders are still committed to creating inclusive and just workplaces. Some are turning to AI for help.

Much of the diversity, equity, and inclusion (DEI) practitioner's job involves gathering and analyzing information on the employee experience within an organization. Increasingly, businesses are using AI technologies to better understand the complex data sets that can help them make a difference.

AI systems can reduce bias and discrimination when users are upskilled to use them properly, said CV Viverito, director analyst of DEI at Gartner. For example, HR professionals who interact with AI should be taught to apply inclusive prompts. It's also important to include humans in the process to monitor the decisions that AI is making.

"If you're doing that right, you'll end up mitigating two sets of biases -- yours and AI's -- and hopefully getting to a more inclusive outcome," Viverito said.

Like any technology, however, AI can deepen bias and discrimination when it should be eliminating them. To prevent this, leaders need to be aware of the downsides of using AI for DEI, not just the benefits.

Common uses for AI in inclusive talent acquisition

As with other organizational processes, the use of generative AI and other AI tools in recruitment -- including as a DEI aid -- is a rapidly developing area.

There are several ways that organizations are using AI to help support a fair and equitable talent acquisition process, Viverito said. Those uses include the following:

  • Candidate ranking and selection.
  • Summarizing candidate skills and experience.
  • Reviewing job descriptions to ensure they feature inclusive language.

Ways that AI can add to bias and discrimination

No technology is without its downsides and dangers. With organizations handing over so many life-changing decisions to AI, business and IT leaders need to be especially thoughtful, since algorithms can worsen discrimination. Here are some of those dangers.

Reproduce existing bias

AI can be taught to reproduce the biases that already exist in an organization, and that compromises DEI.

This happens when the AI is instructed to seek out job candidates that are much like the people a company already employs, said Serena Huang, author of The Inclusion Equation: Leveraging Data and AI for Organizational Diversity and Well-Being and founder of Data With Serena, a data and AI consulting firm based in Chicago.

AI-run video assessments are one example where that danger is clear, Huang said. As part of its screening process, a company may require job candidates to attend a video interview that is entirely conducted by generative AI. The system may be trained to evaluate the following:

  • What schools the candidates attended.
  • What constitutes a good answer versus a poor one.
  • Personality traits that would make them a good fit for the organization.

It can be problematic when these systems are also trained to compare current candidates with previous successful hires, Huang said. If most of those hires are white men who graduated from Harvard, for example, the AI will prioritize candidates who fit that profile.

"It grades people who don't fit that mold as 'not great' hires, so they don't even make it to talking to a human at all," Huang said.

This is why organizational leaders should conduct regular, human-driven audits -- weekly, monthly, or quarterly, depending on the organization's recruiting volume -- to examine important aspects of using AI responsibly and fairly, Huang said. These factors include the following (a simple audit sketch appears after the list):

  • The decisions the AI is making.
  • Candidate profiles that AI tools are filtering out of the recruiting process.
  • How the data compares to the hiring decisions that were made when the recruiting process didn't incorporate AI.
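What such an audit might look like in code, as a minimal sketch: it assumes the organization can export each candidate's self-identified demographic group alongside the AI's pass/reject decision, and it applies the four-fifths rule, a common adverse-impact heuristic -- not something Huang specifically prescribes. Field names and data are illustrative.

```python
# Minimal audit sketch: check AI screening decisions for disparate impact
# using the four-fifths rule. Groups and decisions are illustrative.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, passed) tuples from one audit period."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def four_fifths_check(rates):
    """Flag any group whose selection rate is < 80% of the highest rate."""
    top = max(rates.values())
    return {g: (r, r >= 0.8 * top) for g, r in rates.items()}

# Illustrative data: (self-identified group, advanced by the AI screen?)
ai_decisions = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]

for group, (rate, ok) in four_fifths_check(selection_rates(ai_decisions)).items():
    status = "OK" if ok else "REVIEW: possible adverse impact"
    print(f"group {group}: selection rate {rate:.0%} -> {status}")
```

Running the same functions over hiring decisions from before AI was introduced covers the third item on the list: a side-by-side comparison of the two sets of rates shows whether the tool has shifted outcomes.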

Business and HR leaders often get caught up in making the recruiting process more efficient at the risk of perpetuating bias and discrimination.

"We forget to check if the promise that we made around de-biasing isn't being fulfilled," Huang said.

Avoiding that bias and properly configuring AI systems takes work, she said, and that work requires gathering feedback and input from a diverse range of voices.

Take a narrow view of people

Another problem with teaching AI to rank candidates based on the schools they attended and the skills they've developed as a result is that it may not consider how those individuals got there.

When AI systems view applicants through a narrow lens, businesses risk overlooking strong performers, said Rohini Anand, a strategic DEI advisor and author of Leading Global Diversity, Equity and Inclusion: A Guide for Systemic Change in Multinational Organizations.

For example, one candidate may have attended an Ivy League institution thanks to their socioeconomic standing, Anand said. Another may possess the same skills but struggled to put themselves through college.

"Organizations want individuals who are resilient, who are hardworking, who have really overcome adversity," Anand said. "[Ignoring that means] you lose out on that nuance of the whole person."

Remove human decision-making

Another potential danger arises when companies give AI too much credit and let the tools have the final word.

For example, a hiring manager is tasked with reviewing and ranking five candidates based on whether they fit the position in question, Viverito said. The AI is asked to do the same. In the end, the rankings from both sources are completely different.

In this situation, there is a tendency for people to place more faith in the AI's choices, they said. The thinking here is that the AI tool must have checked for human biases, so its decisions are probably better. But this isn't always the case.

"What happened is the AI tool just produced different biases than the human did, which is entirely possible," Viverito said. "That would be bad, because then you're just choosing from a set of different biases."

Business and HR leaders should think of AI as a "decision informer," not a decision-maker, Viverito said. For example, the system may give lower rankings to candidates with gaps in their career history, when some of those individuals may be ideal for the job. It's possible they were out of work because they were caring for an ailing parent or a newborn child, or simply because they faced a tough job market.

AI systems that offer interpretability and explainability are key. Organizations should work with vendors whose tools can explain why the AI made certain decisions, Viverito said. This enables users to flag questionable results and retrain the AI to stop arriving at biased decisions.

"If you're using a model that can be quickly retrained and you're upskilling your people [on how] to use it, you'll get the positive outcomes," they said. "If not, you may get negative outcomes."

Create biased history

AI-powered tools used for generating meeting summaries have the potential to discriminate.

For example, if a meeting is conducted in English and some participants have foreign accents, the tool may misquote them. It may also misgender people, because it bases its guesses on cues such as names or depth of voice.

Humans must be involved in producing the final edit of the meeting summary before it is sent out, Viverito said. The AI tool should also not assume someone's gender if they haven't offered their pronouns.
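As a hedged sketch of how a summarization pipeline could honor that rule, the snippet below defaults to they/them pronouns unless a participant has explicitly supplied their own. The participant-profile structure is invented for illustration.

```python
# Sketch: never infer gender in meeting summaries. Use self-supplied
# pronouns when present; otherwise fall back to they/them. The profile
# structure is illustrative.

DEFAULT_PRONOUNS = ("they", "them", "their")

def pronouns_for(participant: dict) -> tuple:
    """Return (subject, object, possessive) pronouns for a participant."""
    return participant.get("pronouns") or DEFAULT_PRONOUNS

alex = {"name": "Alex", "pronouns": ("she", "her", "her")}
sam = {"name": "Sam"}  # no pronouns offered -- do not guess

for person in (alex, sam):
    subj, _, poss = pronouns_for(person)
    print(f"{person['name']} presented {poss} update; {subj} will follow up.")
```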

How AI can support DEI

Despite the dangers of AI, there is much promise in its use to support DEI efforts. Here are a few of the ways.

Identify DEI pain points

When the right guardrails are in place, AI can facilitate the data analysis DEI practitioners need to perform to identify the barriers that are compromising diversity, equity, and inclusion in their organizations, Anand said. AI tools can be very helpful in analyzing disparate data sets to paint a clear picture of what is happening within a company.

"AI is able to really bring all this together in a comprehensive way and do that work for us," Anand said. "The scale is an incredible opportunity. The speed is an incredible opportunity."

Support inclusive candidate screening

When hiring managers base candidate screening solely on skills and experience, their likelihood of selecting a more diverse talent pool increases, Huang said.

Recruitment processes that use machine learning algorithms designed to remove data that contributes to bias can be helpful, she said. For example, data such as candidate names or universities tends to reinforce biases around gender and homogeneous cultural fit.
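Here is a minimal sketch of that idea: stripping bias-correlated fields from a candidate record before it reaches the scoring step. The field names are hypothetical, and a real de-biasing pipeline would also need to handle proxy variables, which this sketch does not.

```python
# Sketch: redact fields known to correlate with bias (e.g., name,
# university) before a candidate record is scored. Field names are
# illustrative; real pipelines must also watch for proxy variables.

BIAS_CORRELATED_FIELDS = {"name", "university", "graduation_year"}

def redact(candidate: dict) -> dict:
    """Return a copy of the record with bias-correlated fields removed."""
    return {k: v for k, v in candidate.items() if k not in BIAS_CORRELATED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "university": "Example University",
    "graduation_year": 2012,
    "skills": ["python", "sql"],
    "years_experience": 8,
}

print(redact(candidate))
# -> {'skills': ['python', 'sql'], 'years_experience': 8}
```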

Improve performance feedback

When configured correctly, AI systems can help address unfair practices around performance.

Some organizations are using AI to successfully address issues related to career advancement and pay inequality, according to "The Role of Data and Artificial Intelligence in Driving Diversity, Equity, and Inclusion," a report published in April 2022 by the IEEE Computer Society, a research and education institution based in Los Alamitos, Calif.

"For instance, human resource managers and top executives often have unconscious ideas and feelings about an employee's readiness for promotion," the report stated. "AI algorithms can remove such bias and base decisions on test skills, aptitudes and other factors that correlate with readiness for a position an employee is seeking."

Considering performance apart from color or gender, for example, is difficult -- if not impossible -- for humans. This is where algorithms can excel, provided they are designed properly and with the right inputs.

AI's role in improving employee performance feedback is a less common use case, but one with much potential, Viverito said. Equitable representation tends to diminish at the top of organizations, which puts a spotlight on the promotions process and on whether performance review feedback is, indeed, guiding employees toward the skills and behaviors they must adopt to get promoted.

AI tools already exist that can evaluate performance review text to determine whether the feedback the manager gives is actionable or not, Viverito said.

"That, to me, is interesting because unconsciously, is that manager giving more actionable feedback to employees that are similar just because of the sameness bias that creeps in?" they said.

Responsible AI can help leaders and managers provide actionable guidance to employees on how they can progress in their careers, Viverito said. That creates the potential for a fairer process in which more people have an equitable opportunity to advance at the same rate.

Carolyn Heinze is a Paris-based freelance writer. She covers several technology and business areas, including HR software and sustainability.
