
Humans in the loop won't prevent AI disasters, experts say

National Academies workshops reveal skepticism that humans in the loop could prevent an AI mishap. Experts highlight human flaws and AI's disarming nature as concerns.

Experts are skeptical that the often-touted idea of keeping "humans in the loop" in AI deployment is enough to save humanity from an AI catastrophe or a business from a discrimination claim. Humans might be more flawed and prone to misjudgments than the systems they oversee.

That was one upshot of a recent series of workshops held by the National Academies of Sciences, Engineering and Medicine to examine human and organizational risk factors in AI management. The forum brought together industry experts from companies such as Google and Microsoft with academic and public policy organizations focused on AI.

A common idea in AI risk management is that keeping a human in the loop to participate in or supervise decision-making will mitigate potential pitfalls. However, humans can be unpredictable, harbor uncertain motivations and overestimate their ability to respond to AI system anomalies.

According to panelists, AI systems can also be disarming, lulling the managers of those systems into being less attentive than they need to be. That creates its own problem: assessing and measuring how humans in the loop actually interact with AI systems.

Tara Behrend, a professor at Michigan State University's School of Human Resources and Labor Relations, commented on an audience member's observation that, "AI harms range from Armageddon to hurting my feelings." She said the remark might have been tongue-in-cheek, "but hurting a person's feelings can also lead to Armageddon when that person starts making decisions and reacting to the AI by sabotaging, avoiding or working around it."


Laura Weidinger, a senior research scientist at Google DeepMind, said the current wave of AI systems is anthropomorphic: It behaves in human-like ways and uses natural language, and that creates issues.

"Do people project some human properties onto those systems?" Weidinger said. "Do they trust them more?"

"This is different from previous AI systems and starts creating risks that seem really fuzzy and human-like," she said. It's also important to find ways to measure these types of questions, she added.

Hanna Wallach, partner research manager at Microsoft Research, underscored Weidinger's point about researching difficult-to-answer questions. But she also chastised the academic community, which has "a little bit of an attitude that unless we know how to measure something perfectly, let's not bother trying.

"As somebody who works in industry, doing something to gain information about what is going on is better than doing nothing," Wallach added. "I would much rather people try to measure these sorts of fuzzy, difficult-to-measure concepts than do nothing."

That AI creates issues in every aspect of life was a persistent theme at the workshops.

Diana Burley, vice provost for research and innovation at American University, emphasized the unpredictable nature of human behavior when interacting with technology. She noted that people rarely use technology as expected, which can lead to unintended consequences.

Alexandra Givens, CEO of the Center for Democracy and Technology, said much more is needed than having humans in the loop. "The human in the loop cannot become a fig leaf that actually stops accountability for responsible AI tool design," she said.

Marc Rotenberg, executive director and founder of the Center for AI and Digital Policy, said an AI system should be deployed only if it is human-centric and trustworthy.

If AI systems have outcomes that can't be proven, reproduced, traced or contested, "they simply should not be used to make decisions about people," he said.

Patrick Thibodeau is an editor at large for TechTarget Editorial who covers HCM and ERP technologies. He's worked for more than two decades as an enterprise IT reporter.
