Trust but verify: Digging into audits for AI algorithm bias
TechTarget editors discuss efforts to remove bias from AI and why algorithm auditing could be a fix -- provided challenges related to trust and standardization can be overcome.
By now, it is beyond dispute that AI sometimes gives results and recommendations that are unfair to minorities. What remains in doubt is whether developers will ever get the AI beast under control -- or can be trusted to try.
AI algorithm bias has been found in a wide range of applications, from software that determines creditworthiness to facial recognition technology for spotting criminals to automated hiring tools.
So far, the software industry has not done a great job of policing itself or of reassuring users with the kind of transparency that explainable AI aims to provide.
In fact, experts disagree on the best means to the largely agreed-upon end of eliminating AI algorithm bias. Is it good enough to test the output of black box AI, or do you need to know how it works? What's the point of showing the inner workings if most people won't know what they're looking at? And once you eliminate bias, who will stay vigilant and guard against software updates letting it back in?
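For a concrete sense of what black-box output testing can look like, here is a minimal sketch in Python of the four-fifths (80%) rule, a long-standing benchmark in U.S. hiring-discrimination guidance. It compares selection rates across groups using only a model's outputs -- no access to its internals. The function names and data are illustrative assumptions, not drawn from any vendor's or auditor's actual process.

```python
# Minimal sketch of a black-box bias check: the four-fifths (80%) rule.
# It needs only model outputs and group labels -- no access to the model's
# internals -- which is what makes it usable on black box AI.
# All names and data here are hypothetical, for illustration only.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute the fraction of positive decisions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(decisions, groups, threshold=0.8):
    """Flag disparate impact: every group's selection rate must be at
    least `threshold` (80% by default) of the highest group's rate."""
    rates = selection_rates(decisions, groups)
    top = max(rates.values())
    ratios = {g: rate / top for g, rate in rates.items()}
    return all(r >= threshold for r in ratios.values()), ratios

# Hypothetical hiring-tool outputs: 1 = advance candidate, 0 = reject.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

ok, ratios = passes_four_fifths_rule(decisions, groups)
print(ratios)                       # {'a': 1.0, 'b': 0.75} -- below 0.8
print("passes" if ok else "fails")  # fails
```

A check like this can be re-run after every software update, which speaks to the vigilance question above. Its obvious limitation is that it says nothing about why a disparity exists -- which is exactly where the debate over opening up the inner workings begins.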
In this podcast, Brian McKenna, business applications editor at ComputerWeekly, and I discuss some of the possible answers I learned about while researching a story on algorithmic auditing. We also touch on broader issues of diversity and inclusion in technology.
A few things became clear: AI is prone to bias, fixes for it are immature and help may be on the way. Auditing AI periodically for bias could be the most practical solution for both buyers and sellers. Unfortunately, audit processes are still being developed -- hotly debated, in fact.
Where to learn more about AI algorithm bias
TechTarget journalists have covered AI algorithm bias in depth for several years, in dozens of articles. I found two to be especially helpful. George Lawton's comprehensive feature on racism in AI systems draws on a wide range of sources, including interviews with people of color who have been adversely affected by the technology. Sebastian Klovig Skelton's deep dive into auditing for algorithmic discrimination was one of the first articles on algorithm auditing in the business press.
Academics and think tanks have been all over the topic. Several writers stand out for their incisiveness and willingness to take on algorithm auditing providers and AI vendors. Perhaps the most visible is Alex Engler of The Brookings Institution. His report for Brookings makes a strong argument, backed up with details from two audits of AI hiring tools, that algorithm auditing in its current form cannot be fully trusted and that federal regulation might be needed to get AI algorithm bias under control.
A paper by the Northeastern University auditors who examined automated hiring software from Pymetrics gives an inside view of how one vendor strives to mitigate bias and of one approach to auditing that could be a path forward. The cooperative auditing process proposed by the authors is meant to balance auditor independence with sufficient transparency to examine software code without threatening intellectual property.
Vendors like Pymetrics make a credible case that they are vigilant in rooting out AI bias. The real challenge for them might be getting their customers and government regulators to believe them. That's why critics and vendors alike say the priority should be to develop algorithm auditing that is standardized, trustworthy and independent.
Perhaps the best advice about handling the risk of AI algorithm bias comes from Ben Eubanks, an HR technology analyst who cites a quote often attributed to U.S. President Ronald Reagan during nuclear arms talks: Trust but verify.
To hear the podcast, click on the link above.