Cornell researchers call for AI transparency in automated hiring

Researchers at Cornell University want vendors to disclose how automated hiring systems make hiring recommendations. They say such disclosure is the only way to know whether the systems are fair.

Cornell University is becoming a hotbed of warnings about automated hiring systems. In two separate papers, researchers have given the systems considerable scrutiny. Both papers cite problems with AI transparency, or the ability to explain how an AI system reaches a conclusion.

Vendors are selling automated hiring systems partly as a remedy for human bias. But they also argue the systems can speed up the hiring process and select applicants who will make good employees.

Manish Raghavan, a computer science doctoral student at Cornell who led the most recent study, questions vendors' claims. If AI is doing a better job than hiring managers, "how do we know that's the case or when will we know that that's the case?" he said.

A major thrust of the research is the need for AI transparency, not only for the buyers of automated hiring systems but for job applicants as well.

At Cornell, Raghavan knows students who take AI-enabled tests as part of a job application. "One common complaint that I've heard is that it just viscerally feels upsetting to have to perform for a robot," he said.


A job applicant may have to install an app to film a video interview, play a game that may measure cognitive ability or take a psychometric test that can be used to measure intelligence and personality.

"This sort of feels like they're forcing you [the job applicant] to invest extra effort, but they're actually investing less effort into you," Raghavan said. Rejected applicants won't know why they were rejected, the standards used to measure their performance, or how they can improve, he said.

Nascent research, lack of regulation

The paper, "Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices," is the work of a multidisciplinary team of computer scientists, as well as those with legal and sociological expertise. It argues that HR vendors are not providing insights into automated hiring systems.


The researchers looked at the public claims of nearly 20 vendors that sell these systems. Many are startups, although some have been around for more than a decade. The researchers argue that vendors are taking nascent research and translating it into practice "at sort of breakneck pace," Raghavan said. Vendors are able to do so because of a lack of regulation.

Vendors can produce data from automated hiring systems that shows how their systems perform in helping achieve diversity, Raghavan said. "Their diversity numbers are quite good," but they can cherry-pick what data they release, he said. Nonetheless, "it also feels like there is some value being added here, and their clients seem fairly happy with the results."

But Raghavan would like to see transparency improve on two levels. First, he suggested vendors release internal studies that show the validity of their assessments. That data should include how often vendors run into issues of disparate impact, which the U.S. Equal Employment Opportunity Commission assesses with a formula for determining whether hiring has a discriminatory effect on a protected group.
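
In practice, that EEOC formula is commonly known as the "four-fifths rule": if a group's selection rate is less than 80% of the rate for the group with the highest selection rate, the process is generally regarded as showing evidence of adverse impact. The sketch below is purely illustrative, with made-up group names and numbers; it is not code from the Cornell paper or from any vendor's system.

```python
# Hypothetical illustration of the EEOC "four-fifths" (80%) rule,
# a common screening test for disparate impact.

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is under 4/5 of the top group's rate.

    outcomes maps group name -> (number hired, number who applied).
    """
    rates = {g: hired / applied for g, (hired, applied) in outcomes.items()}
    top_rate = max(rates.values())
    return {g: rate < 0.8 * top_rate for g, rate in rates.items()}

# Made-up applicant pool: 50 of 100 hired in group_a, 30 of 100 in group_b.
pool = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(pool))
# {'group_a': False, 'group_b': True} -- group_b's 30% selection rate is
# below 80% of group_a's 50% rate (0.40), indicating possible adverse impact.
```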

A second step for AI transparency involves letting independent third-party researchers conduct their own analyses of the systems.

Vendors argue that AI systems do a better job than humans at reducing bias. But researchers see a risk that the systems could embed biases against a group of people that won't be easily discovered without an understanding of how the systems work.

One problem often cited is that an AI-enabled system can help improve diversity overall yet still discriminate against certain groups or people. New York University researchers recently noted that most of the AI code today is being written by young white males, who may encode their biases.

Ask about the 'magic fairy dust'

Ben Eubanks, principal analyst at Lighthouse Research & Advisory, believes the Cornell paper should be on every HR manager's reading list, "not necessarily because it should scare them away but because it should encourage them to ask more questions about the magic fairy dust behind some technology claims."

"Hiring is and has always been full of bias," said Eubanks, who studies AI use in HR. "Algorithms are subject to some of those same constraints, but they can also offer ways to mitigate some of the very real human bias in the process."

But the motivation for employers may be different, Eubanks said.

"Employers adopting these technologies may be more concerned initially with the outcomes -- be it faster hiring, cheaper hiring, or longer retention rates -- than about the algorithm actually preventing or mitigating bias," Eubanks said. That's what HR managers will likely be rewarded on.

In a separate, recently published paper, "Automated Employment Discrimination," Ifeoma Ajunwa, assistant professor of labor relations, law and history at Cornell University, argued for independent audits and compulsory data retention.

Ajunwa's paper raises problems with automated hiring, including systems that "discreetly eliminate applicants from protected categories without retaining a record." 

AI transparency adds confidence

Still, in an interview, Cornell's Raghavan was even-handed about using AI and didn't warn users away from automated hiring systems. He can see use cases but believes there is good reason for caution.

"I think what we can agree on is that the more transparency there is, the easier it will be for us to determine when is or is not the right time or the right place to be using these systems," Raghavan said.

"A lot of these companies and vendors seem well-intentioned -- they think what they're doing is actually very good for the world," he said. "It's in their interest to have people be confident in their practices."
