EEOC hearing on AI bias criticized as one-sided
The U.S. Equal Employment Opportunity Commission held a hearing this week to examine how AI bias can hurt job seekers. But the panel itself was criticized as unbalanced.
A U.S. Equal Employment Opportunity Commission hearing on AI bias and employment discrimination was all but dismissed as a waste of time by one of its commissioners.
At the start of the hearing, EEOC Commissioner Keith Sonderling noted that the 12 people scheduled to testify were overwhelmingly from the perspective of those concerned about the risk of AI-based hiring systems and "how employers are allegedly implementing them to the detriment of their workers."
It was the commission's first hearing on AI, and it was "curiously missing representation from those who are actually innovating, designing, building and selling these products," according to Sonderling.
"It is equally important to highlight that AI can mitigate the risk of unlawful discrimination and create accessible workplaces that were previously unimaginable," he said.
In arguing that the AI bias hearing was itself biased, Sonderling, a Republican appointed to the commission by former President Donald Trump, appeared to be speaking out of frustration rather than irony. It's unclear whether HR vendors weren't asked to testify or declined to do so. If they are keeping their distance from the EEOC, they may have reason to: The agency is sharpening its enforcement posture and is working with the U.S. Department of Justice on a plan to enforce anti-discrimination laws as they apply to automated employment decisions.
Those who did testify included computer science and legal experts from the American Civil Liberties Union, Brown University, MIT, Washington University School of Law, and other academic institutions and law firms. About 3,000 people watched the virtual hearing, according to the EEOC.
EEOC Chair Charlotte Burrows said rapid adoption of AI and other automated systems "has truly opened a new frontier in the effort to protect civil rights." She said that "increasingly, automated systems are used in all aspects of employment -- from recruiting, interviewing and hiring to evaluations and promotions, among others."
The goal of the hearing was to raise awareness and educate vendors, employers and others about the use of AI in employment and to bring "enforcement actions when necessary to address violations," Burrows said.
The risk of harm
AI and machine learning "when deployed unthinkingly and without proper guardrails in place will inevitably cause harm," said Suresh Venkatasubramanian, a computer science professor at Brown University, who testified at the hearing.
Venkatasubramanian is also the former assistant director for science and justice in the White House Office of Science and Technology Policy, which developed the Blueprint for an AI Bill of Rights, an ethical framework for AI adoption intended to give federal agencies, regulators and lawmakers a foundation for developing new laws around AI bias.
One of the significant reasons for AI bias "is that people of color are overrepresented in undesirable and unreliable data that often foreclose job opportunities," said ReNika Moore, director of the American Civil Liberties Union's racial justice program.
Those who are Black, Latino and Native American "are disproportionately represented in criminal databases due to a variety of factors, including racial profiling of people of color and harsher outcomes in the criminal legal system," Moore said. She added that Black women, for instance, are more likely to be targeted for eviction, another kind of record that can count against job applicants.
At this hearing, the lone defender of the HR tech industry was Jordan Crenshaw, vice president of the U.S. Chamber of Commerce's Technology Engagement Center, who urged the EEOC to use caution before regulating.
Crenshaw asked policymakers not to place so many restrictions on private sector data collection that they would "inhibit the ability of developers to improve and deploy AI systems in a more equitable manner."
Don't rush, Chamber advises
Crenshaw urged regulators not to adopt requirements such as third-party auditors "until standardization and best practices have been developed."
"We would caution against agencies viewing the mere use of sophisticated tech tools like AI as suspicious," Crenshaw said.
New York City's law on automated decisions used in employment, the first of its kind in the U.S., is due to take effect in mid-April. It requires third-party audits to check for AI bias. Tech vendors have been providing feedback on the law at the city's hearings but are still contesting the exact language of the final rule.
Earlier this month, NYC Councilwoman Selvena Brooks-Powers (D-Laurelton), the majority whip, warned that the city's AI bias law risks being watered down. The proposed changes from the NYC Department of Consumer and Worker Protection, which is finalizing the regulation, include narrowing the definition of the technologies covered to those that "fully replace or overrule human decision-making in the hiring process."
The problem, Brooks-Powers said, is that "human decision-making always has some role to play in the hiring process."
"We voted for transparency in the hiring process to support racial equity," Brooks-Powers said. "We voted to make it clear that certain types of hiring tools systemically disadvantage Black and brown people."