Global Artificial Intelligence Conference 2018: The value of context

Speakers at the Global Artificial Intelligence Conference 2018 confronted the problems that arise when context and reality are ignored in developing and implementing AI.

Would you recognize an elephant if it were in your living room? The short answer: Of course you would, but AI programs can't. And it's not for lack of inherent intelligence; it's for lack of context.

AI can recognize animals and shapes, but it cannot recognize that an elephant standing in a living room is outside its typical habitat.

These complications were a recurring theme at the Global Artificial Intelligence Conference 2018 in Boston this week: Technology cannot be developed in a vacuum. And when developing and implementing AI, it's important to give the technology -- and ourselves -- context of use.

"We really ought to help organizations learn and get better at using these systems [that are] so sensitive and responsive to biases. It's about organization and ability to learn and use data," tech analyst Joe Barkai said.

The risks of discriminatory intelligence

Presenters at the conference noted that AI itself must be supplied with context and be developed with the idea that AI exists in a human-centric society that influences the way it's programmed.

Consultant, author and speaker Joe Barkai presents at the Global Artificial Intelligence Conference 2018.

Barkai cited a study by Harvard professor Latanya Sweeney, the "Discrimination in Online Ad Delivery" project, which found "statistically significant discrimination" in online advertisements.

"The AI algorithms are extremely sensitive to what's in the data; they are very sensitive to the biases in the data," Barkai said. "Society is biased. Therefore, data is biased. And, therefore, machine-making algorithms are biased."

Barkai said developers should either recognize that the data will be flawed and correct for it, or take the output with a grain of salt. They should recognize that biased input creates biased output and be transparent about AI's potential discriminatory flaws when releasing results.
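
Barkai didn't prescribe a specific technique, but the kind of audit he describes can be simple. The Python sketch below -- using hypothetical ad-delivery data and the common "four-fifths" disparate-impact convention, neither of which comes from his talk -- flags groups whose outcome rate falls well below the highest group's:

    # Hypothetical bias audit: measure whether outcomes differ across groups
    # before trusting a model's output. Data and threshold are illustrative.
    from collections import defaultdict

    def selection_rates(records):
        """Rate of positive outcomes per group."""
        positives = defaultdict(int)
        totals = defaultdict(int)
        for group, positive in records:
            totals[group] += 1
            positives[group] += int(positive)
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact(records, threshold=0.8):
        """Flag groups whose rate falls below `threshold` times the
        highest group's rate (the 'four-fifths rule' convention)."""
        rates = selection_rates(records)
        best = max(rates.values())
        return {g: rate / best < threshold for g, rate in rates.items()}

    # Hypothetical ad-delivery log: (group, ad_shown)
    log = [("A", True)] * 90 + [("A", False)] * 10 + \
          [("B", True)] * 55 + [("B", False)] * 45

    print(selection_rates(log))   # {'A': 0.9, 'B': 0.55}
    print(disparate_impact(log))  # {'A': False, 'B': True} -- group B flagged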

How to deliver the most value

When AI is developed out of context, it can be obsolete on arrival. Barkai pointed to an early AI system built to help mechanics replace a jet's starter motor. Because the developers of the robotic assistant didn't supply context -- the knowledge of the mechanic team and the processes of a mechanic's work -- the AI assisted only with simple tasks and ended up making the existing motor-replacement process even more difficult, Barkai said.

"Our relationship to AI demonstrates huge insensitivity to needs and work environment," he added.

Society is biased. Therefore, data is biased. And, therefore, machine-learning algorithms are biased.
Joe Barkai, consultant, author and speaker

Michael Roytman, chief data scientist at Kenna Security, based in San Francisco, discussed how implementing AI in a security setting can demonstrate the real value of predictive analytics. However, while advanced AI algorithms can detect vulnerabilities, they still need to work alongside humans to be programmed for maximum efficiency.

"Attackers are about 60 to 80 days out from the vulnerability being discovered to using it for attack," Rotyman said.

If the rate of exploitation and the rate of remediation are the same -- 60 to 90 days -- companies are essentially just treading water, he added.
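
That arithmetic is easy to check. A back-of-the-envelope simulation (all numbers hypothetical, not Roytman's) shows that a backlog of exposed vulnerabilities never shrinks when fixes merely keep pace with newly exploitable flaws:

    # "Treading water": the backlog stays flat when remediation only matches
    # the arrival of newly exploitable vulnerabilities. Numbers are made up.
    def backlog_over_time(periods, new_per_period, fixed_per_period, start=100):
        backlog, history = start, []
        for _ in range(periods):
            backlog = max(0, backlog + new_per_period - fixed_per_period)
            history.append(backlog)
        return history

    print(backlog_over_time(6, new_per_period=10, fixed_per_period=10))
    # [100, 100, 100, 100, 100, 100] -- treading water
    print(backlog_over_time(6, new_per_period=10, fixed_per_period=12))
    # [98, 96, 94, 92, 90, 88] -- outpacing attackers shrinks exposure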

With only roughly 2% of all vulnerabilities successfully exploited, Roytman suggested implementing a smarter, more context-driven AI system that can identify a vulnerability and assess its likelihood of exploitation. When a new vulnerability is discovered, an AI system developed with context can evaluate the risk and determine whether businesses should treat it as a severe, moderate or mild threat.
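
One way to picture such a system: score each new vulnerability by contextual signals of real-world risk, not raw severity alone. The sketch below is a minimal illustration under assumed inputs; the fields, weights and cutoffs are hypothetical, not Kenna Security's model:

    # Context-driven triage sketch: blend base severity with signals that a
    # vulnerability is actually likely to be exploited. Weights are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Vulnerability:
        cvss: float              # base severity score, 0-10
        exploit_published: bool  # public exploit code exists
        asset_exposed: bool      # affected system faces the internet

    def risk_score(v: Vulnerability) -> float:
        score = 0.6 * (v.cvss / 10.0)  # severity alone is capped at 0.6
        if v.exploit_published:
            score *= 1.5
        if v.asset_exposed:
            score *= 1.3
        return min(score, 1.0)

    def triage(v: Vulnerability) -> str:
        s = risk_score(v)
        return "severe" if s >= 0.7 else "moderate" if s >= 0.4 else "mild"

    # A high-CVSS flaw with no exploit on an internal system can rank below
    # a lower-CVSS flaw that attackers are actively positioned to use.
    print(triage(Vulnerability(9.0, exploit_published=False, asset_exposed=False)))  # moderate
    print(triage(Vulnerability(7.0, exploit_published=True, asset_exposed=True)))    # severe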

Looking to the future

Barkai and Roytman both said that when implementing AI in the workplace, companies should consider the needs of workers first and choose the AI technology that best bridges the gap between human and automated workers.

Beyond integrating context into AI design, there are ethical issues as well, including IT workers' well-documented fears that AI will take their jobs or make them harder.

"We need ethical considerations. We have to have open dialogue -- AI can't be something that developers just do a lab," Barkai said.
