
AI in recruiting may be above HR's head

Video job interviews are powerful tools. HireVue's AI algorithm can watch a candidate without losing focus and notice things a hiring manager may not see.

AI-enabled video job interviews can do everything but undress you. They can analyze your facial expressions, what you say -- and even how you say it. A mere flicker of an expression informs the analysis, as can the inflections of your voice.

This AI-in-recruiting technology is part of HireVue's video interviewing system. Its AI scoring system ranks job candidates based on its analysis of the interview, which could put candidates in line for a job offer or a rejection letter.

HireVue's AI "notices everything that happens," said Loren Larsen, CTO of HireVue Inc. On one level, the system looks for many of the same details a human interviewer looks for -- but it does so in exacting detail.


For HR departments, AI in recruiting could transform the job. Unlike a human conducting a job interview, the machine does not lose focus. But it also poses new problems: For HR managers, the tools may be little more than black boxes, said Ben Eubanks, analyst at Lighthouse Research & Advisory.

Startup AI vendors promise to help employers find the best culture fit or the right next hire, "but very few of them can back that up with either science, proven data, or some sort of level of transparency where I feel comfortable enough to recommend them to a buyer," he said.

A career-ending risk for HR

The use of AI in recruiting is still relatively new. System developers, such as HireVue, claim their algorithms can do a better job than humans in identifying the best candidates out of hundreds or even thousands of applications.

But HR managers face serious career-ending risks if they don't understand what's going on inside these products, especially when they're used to make critical decisions that could affect people's lives. AI algorithms can operate in a black box, which means there's little visibility into how the algorithm arrived at a recommendation. HR managers need to know all the inputs or signals the algorithm is using, as well as how they are weighted, said Eubanks, the author of a newly published book, Artificial Intelligence for HR.

If a candidate-ranking algorithm makes inaccurate recommendations, vendors might get fired, but HR managers could lose their jobs, Eubanks said. Businesses have to insist on transparency from vendors and "never accept that black box answer," he said.

The risks and uncertainties about the technology can create trust issues, which HireVue is trying to alleviate. The company has some advantages, Eubanks noted. It is a 15-year-old firm that began with a standard video interview platform and has been working with AI for about five years now. The company works with high-profile customers, including Hilton and Unilever.

HireVue also recently formed an "expert advisory board" of technical experts, academics and others, to review algorithms and methodologies. "They are making sure that the methods we're using are correct and state of the art," Larsen said.

But only about 15% to 20% of HireVue's customers use its AI-enabled system. One reason is that AI models need a lot of data to be trained for a specific role, and that's custom work that takes time and money. The most common obstacle is that HireVue customers don't have enough people in a specific role to train the algorithm on, Larsen said. Discovering what makes the best employee takes a lot of data, he said.

HireVue, however, has learned enough about customizing AI algorithms for specific job roles that, in October, it announced a new product offering: prebuilt assessments. The company's prebuilt AI models don't need the same volume of training data and are available for standard roles such as call center representative, sales representative, retail associate and software developer.

Trust is another factor in AI adoption: Companies continue to question whether AI models can be relied on in HR applications generally.

"A lot of the concerns about the correct use of AI are very grounded," Larsen said. "AI is still new and internally [within the HR profession], there's still debate about the role of it."

Effort to improve confidence in AI-enabled hiring

Today, AI systems are broadly used to analyze text-based information about a candidate, but video can also be part of the assessment. AI models such as those from HireVue look for many of the same things a human may notice in a face-to-face interview. But AI-enabled video is a uniquely powerful tool.

HireVue's system can run a frame-by-frame analysis. It can spot flickers of emotion -- sometimes called microexpressions -- that may happen too fast and be too subtle for a human to notice. The system can also match voice inflections with what's being said.

Here's an example: A candidate says, "I love my boss, who is great." The words seem right, but the candidate's voice goes flat and weakens at the word "great." The algorithm makes a note of it, according to Larsen. The candidate may also show a flicker of contempt at something, perhaps too quick for a human to notice. But the algorithm catches it.
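The idea Larsen describes can be illustrated with a toy sketch. This is not HireVue's actual method; the word list, energy values and threshold below are invented for illustration. It flags words whose positive sentiment coincides with a sharp drop in vocal energy, as in the "great" example above.

```python
# Toy sketch (not HireVue's actual method): flag words where spoken
# sentiment and vocal delivery disagree. The lexicon, energy values
# and threshold are all hypothetical.

POSITIVE_WORDS = {"love", "great", "excellent", "enjoy"}

def flag_mismatches(words, energies, drop_ratio=0.6):
    """Return positive-sentiment words delivered with a vocal energy
    well below the utterance's average -- a possible mismatch."""
    avg = sum(energies) / len(energies)
    flags = []
    for word, energy in zip(words, energies):
        token = word.strip(".,!?").lower()
        if token in POSITIVE_WORDS and energy < drop_ratio * avg:
            flags.append(token)
    return flags

words = "I love my boss , who is great".split()
energies = [0.8, 0.9, 0.8, 0.7, 0.6, 0.7, 0.8, 0.3]  # voice flattens on "great"
print(flag_mismatches(words, energies))  # -> ['great']
```

A production system would derive the energy values from the audio signal and the sentiment from a trained model, but the mismatch logic would follow the same shape.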


This system isn't out to exclude people who aren't perfect, Larsen said. Instead, if the job is customer-facing, it focuses on important general characteristics: "Are they friendly? Are they enthusiastic? Do they demonstrate passion when they answer a question? Will my customers engage with them?"

"We're not making a decision on one flicker of the face," he said, or "You smiled at the wrong time: You're not going to get the job."

Every job has different attributes that are important in a video assessment. For instance, in a video interview for a more technical role, word choice may be highlighted when evaluating a candidate. "Do they use the technical language that you would expect someone with that experience to use?" Larsen said.

The video analysis also isn't the only thing that makes or breaks a candidate, as the resume and game-type tests may be part of the overall assessment, according to Larsen.

The broad goal of AI in recruiting is to help address big problems for HR managers, including improving the diversity of their workforces, in part by reducing human bias, and sorting hundreds to thousands of job applications more efficiently.

The problem with vetting AI systems

The problem that HR departments face is in vetting AI-enabled systems, which can be difficult to audit, according to Anupam Datta, professor of electrical and computer engineering at Carnegie Mellon's College of Engineering.

Explainability tools, which clarify how these systems behave and arrive at recommendations, are still in a nascent stage, although Datta believes they will mature in the not-too-distant future.

Third-party services to audit AI systems, in much the same way a financial auditor with fiduciary responsibility checks financial records, still don't exist. "Something like that has to emerge in this area as well," said Datta, who heads up the Accountable Systems Lab at Carnegie Mellon, which is working on problems posed by decision-making systems.

Vendors also have a responsibility to produce documentation about the types of testing they are doing, and the factors that are driving candidate rankings, Datta said.

HR managers can ask questions to investigate AI systems and determine which factors drive the ranking of candidates. For example, if the system weights candidates by the geography of their recent experience and their role at an employer, that could be suspicious: Location is sometimes correlated with race in candidate selection. "That's a basic kind of sanity check that we would want to be able to do," Datta said.
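The sanity check Datta describes can be sketched as a simple feature-influence measure. This toy example is not from Datta or any vendor; the scoring model, weights and candidate data are all hypothetical. It asks how much a candidate's score shifts when one input, such as a location feature, is neutralized -- a large shift is the kind of red flag an auditor would want to see surfaced.

```python
# Illustrative audit sketch: measure how much a scoring model's output
# changes when one feature is zeroed out. The linear model and its
# weights are invented for this example.

def score(candidate, weights):
    """Toy linear scoring model over named features."""
    return sum(weights[k] * candidate.get(k, 0.0) for k in weights)

def feature_influence(candidates, weights, feature):
    """Average absolute score change when `feature` is neutralized."""
    total = 0.0
    for c in candidates:
        neutralized = dict(c, **{feature: 0.0})
        total += abs(score(c, weights) - score(neutralized, weights))
    return total / len(candidates)

weights = {"experience_years": 0.5, "skill_match": 1.0, "location_score": 0.8}
candidates = [
    {"experience_years": 5, "skill_match": 0.9, "location_score": 1.0},
    {"experience_years": 2, "skill_match": 0.7, "location_score": 0.0},
]
print(feature_influence(candidates, weights, "location_score"))  # -> 0.4
```

Real models aren't linear, but the same question -- how much does this one input move the ranking? -- is what permutation-style audits of production systems try to answer.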

Standards governing the use of AI systems are emerging, and Datta pointed to the Association for Computing Machinery, which has a set of principles for algorithmic transparency and accountability that include auditability for AI algorithms, as well as the possibility of making test results public.

"It's extremely important for these systems to be very carefully vetted and examined and studied before they're moved into production because of the consequences of bias," Datta said. AI algorithms are trained on historical data, and the data can reflect bias against certain groups, he said.

"The onus should not be on the HR managers to do that testing themselves," Datta said.

HireVue's Larsen believes that, ultimately, people will become convinced that AI systems can do a better job than people in assessing candidates. "We know that we can be fairer and be a better predictor of talent than people in most cases," he said.
