Making AI systems responsible, competitive and functional
Many enterprises have a tough time building fair and trustworthy models. Doing so could be profitable, while failing to do so could be catastrophic.
As discussions around AI ethics continue in the public and private sectors, enterprises question the profitability of building responsible AI systems.
While the ROI of responsible AI isn't clear, recent history shows that without the safety rails of governance, AI systems can go wrong and businesses can lose big.
Zillow, an online real estate marketplace, recently lost more than $304 million when the machine learning algorithms it used to estimate home prices overestimated the values of homes the company purchased through its now-defunct Zillow Offers program. News reports revealed that Zillow tried to recoup its losses by offloading thousands of those homes to investors, many for less than it paid.
One could argue that the failures of the machine learning algorithms led to a lack of trust between Zillow and its end users -- an issue that the guardrails of responsible AI aim to avoid.
Despite examples like Zillow, experts say enterprises still aren't paying enough attention to responsible AI.
Roadblocks to responsible AI
Part of the problem may be that enterprises look at AI systems through the wrong lens, said Manoj Saxena, executive chairman of the Responsible AI Institute, a nonprofit organization that seeks to advance responsible and trustworthy AI.
Instead of looking at AI systems through a data and model lens, enterprises should focus on how the systems will impact humans, Saxena said during a panel discussion at the ScaleUp:AI conference in New York.
It is also important for those building and working on the AI systems to ask the right questions, said Krishna Gade, founder and CEO of Fiddler, a software company that works with enterprises to deliver transparent AI experiences.
As a former Meta employee, Gade found that many engineers viewed AI models as a black box and were unable to answer simple questions about why end users see certain stories in their feeds.
"The engineers would shrug their shoulders and say, 'Oh, I don't know, it's just the model,'" Gade said during the same panel discussion. "I think that's the problem that we need to solve: to make sure we understand how these things work. There could be business implications [and] societal implications if you don't do it right."
Responsible AI can be competitive and functional
However, it is not enough for an AI system to be responsible; it must also be competitive.
The first step is to make board members and CEOs understand the profitability of responsible AI, Saxena said. When enterprises invest in responsible AI, they build customer trust and loyalty, which keeps customers coming back and translates into profitability.
Second, enterprises need to approach responsible AI in a holistic way, Saxena said.
Taking a holistic approach means IT, compliance and audit groups work together. It is not enough for those working in IT to simply build a model; they also need to work with the compliance and audit groups to determine whether the model is fair and to ensure it complies with governance rules.
Another way of making responsible AI competitive is to view functional AI as responsible AI. Enterprises must consider not only the models or the software, but also the organizational design behind the workflow, as well as the risks they are willing to accept.
"Deploying AI responsibly is not just a function of tweaking a model," said Jared Dunnmon, technical director of the AI/ML portfolio at the Defense Innovation Unit within the U.S. Department of Defense. "It's a function of how you actually build the workflow around the system."