Approaches for embedding human ethics in AI systems
Instilling ethics in AI systems is a low priority for CIOs aiming to harness the power of machine intelligence. That's a mistake, warned Darin Stewart at the Gartner Catalyst event.
SAN DIEGO -- It may be early days for AI, but it's not too early to start thinking about the ethical considerations around the technology. In fact, today is a defining moment for AI ethics, said Gartner research vice president Darin Stewart at this week's Gartner Catalyst event in San Diego.
"The decisions we make now are going to sit at the core of our models for years and continue to evolve, continue grow and continue to learn," said Stewart during his session. "So we need to set them on a firm ethical foundation so that as they grow through the years they'll continue to reflect our values."
The crux is that, right now, embedding ethics in AI systems is a low priority for most. The focus is mostly on AI control -- making sure systems do what they're supposed to do and having a few corrective measures if things go wrong. But, as Stewart said, putting in a "big red button" to hit when things go very wrong isn't enough. By that time it's already too late.
What we need in our AI solutions is something Stewart calls "value alignment."
"We need some assurance that when these systems make predictions, come to conclusions, give us recommendations and make decisions that they're going to reflect our values," he said. "And not just our personal values as its creator, but the values of the organization we serve and the community or society that it exists within."
Stewart admitted that value alignment might seem above IT practitioners' pay grade, but he certainly doesn't see it that way.
"The developers, the engineers and the designers are in the vanguard," he said. "You all are best positions to take those steps to move us closer to that value alignment."
What steps can IT practitioners take to embed human values and ethics in AI?
A measure of fairness
For starters, they can make sure they aren't directly inserting bias into algorithms. Bias is more or less ubiquitous in machine learning, Stewart said. It arises when the AI model makes predictions and decisions based on sensitive or prohibited attributes -- things like race, gender, sexual orientation and religion.
Fortunately, there are effective techniques for removing bias in data sets, Stewart said. But IT practitioners need to make sure it never gets into the model in the first place. At the very beginning, they need to articulate -- and document -- measures of fairness and the algorithm's priorities. A measure of fairness is what constitutes equitable and consistent treatment for all groups participating in a system or solution. Stewart said there are many guidelines available for use -- many of them free.
"At the very beginning, when you're building the product, decide explicitly and intentionally what that measure of fairness is going to be and optimize your algorithms to reflect both the values statement you create and that measure of fairness," he said. "Then use the boundaries as constraints on the training process itself."
The United States has had a legal doctrine of discrimination since the Civil Rights Act of 1964, which defines types of discriminatory intent and permissible uses of race. Start there, Stewart said.
"Ideally, we will have higher standards, but the locally- set, legal definition of acceptable discrimination should be the bare minimum you work with," he said.
Pay attention to your AI solution in the real world
AI and machine learning systems rarely behave the same in the real world as they did in testing, Stewart said. That can pose a problem for ensuring ethics in AI.
"The problem is that once we release our solutions into the real world, we stop paying attention to the inputs it's feeding off of in the real world," he said. "We don't pay attention to the inputs that are still continuing to train and evolve the model, and that's where things start to go wrong."
He pointed to the now-infamous case of the Microsoft Tay chatbot, which was the target of a malicious attack by Twitter trolls that biased the system beyond repair. That's why Stewart emphasized the importance of consistently and continually inspecting training data.
"Kind of like stalking your kids on social media, you need to keep an eye on your solutions once they are released into the real world," Stewart said. That includes regularly testing and auditing the AI systems.
Instilling ethics in AI: The 'centaur' approach
At this point, Stewart said, we usually don't want to turn decisions over entirely to the AI system. If you are just trying to automate data entry into a spreadsheet or speed up the throughput of an assembly line, then it's fine to do so, he said. But if you are creating a system that is going to "materially impact someone's life," you should have a human being in the loop who understands why decisions are being made.
"You don't want a doctor performing surgery just because the AI said so," he said. Decisions ideally are made by combining machine intelligence, which can deal with exponentially more data than humans can, with human intelligence, which can account for factors that may not have been in the data set. "We're looking for centaurs rather than robots."