
IBM's Policy Lab calls for AI regulation

Launched during the Davos World Economic Forum, IBM's Policy Lab seeks to unite governments and businesses on creating and adopting more regulation of AI.

IBM's Policy Lab is looking to bring together U.S. and European standards bodies and governments to agree on a set of formalized standards to guide the development, maintenance and regulation of AI-based systems.

The Policy Lab, which formally launched at the Davos World Economic Forum late last month, released a document calling for AI regulation based on accountability, transparency, fairness and security. At the heart of the group's mission is helping policymakers take advantage of the benefits of AI systems while building societal trust in a rapidly changing technology landscape.

The Policy Lab is gathering industry support to establish formal standards through organizations like the National Institute of Standards and Technology (NIST) and transnational standards bodies based in Europe, such as the International Organization for Standardization (ISO) and the European Committee for Standardization (CEN), said Ryan Hagemann, co-director of IBM's Policy Lab.

"We'll work with industry and governments that already have regulatory frameworks in place to start driving the conversations forward," he said.

In mid-February, the European Commission (EC) is scheduled to release a white paper outlining what it believes an AI regulatory governance framework should look like, and IBM intends to work with the EC, Hagemann said.

"We will be looking for a positive way to work with members of the Commission on some of their proposals and figure out how that engagement can work over the next six months," Hagemann said.

One of the Policy Lab's top priorities, as it works in concert with standards-setting bodies and governments, is to create rules that eliminate bias in AI systems, which can be trained on old data that bakes discriminatory practices against minorities, women and older Americans into new systems.

Guidelines for regulating AI

"Bias is a concern for a lot of people so companies need to take responsibility for testing for bias early in the development lifecycle of their AI systems," Hagemann said. "And they need to be tested regularly thereafter."

Microsoft, Google join IBM in push for AI regulation

The heads of Microsoft and Google have publicly called for AI regulation. According to Martin Sokalski, principal for emerging technology risks at KPMG, those recent calls appear to be genuine.

There's a trust gap between AI capabilities and user adoption, which is largely attributed to a lack of knowledge around the technology, the prevalence of black box AI, issues around bias in AI, and questions about its accuracy, Sokalski said.

"We have not currently bridged the gap between how to trust in the AI system and understand how it is working, which means when outcomes from an AI system are not understood, we lose confidence that we can rely in the outcome," Sokalski said.

Establishing trust is also an important focus of IBM's Policy Lab; without it, corporate users will be slow to adopt AI systems as a core component of their data centers, Hagemann said. He sees growing backing among standards bodies and governments for the initiative, noting that Japan and Singapore are taking a similar multi-stakeholder approach to establishing an AI governance framework.

"There is a general consensus based on broad principles of what matters in the AI ecosystem, with trust being foremost among those principles," Hagemann said. "Trust is well represented in the OMB [Office of Management and Budget] guidance where it is their first and leading principal."

The White House also kicked off 2020 with a memorandum containing guiding principles for federal agencies to consider when drafting and adopting AI regulations. A draft of the memo, released Jan. 7, highlights 10 policy considerations to guide agencies when crafting those rules. The memo calls for a light touch, however, warning that too much regulation could stifle the technology industry and slow economic growth.

Several days later, in a Jan. 20 editorial for the Financial Times, Alphabet's new CEO, Sundar Pichai, wrote that there is "no question" AI needs regulation, while also seemingly advocating a light touch in crafting it.

Users remain skeptical of AI

Despite the growing consensus among vendors, standards bodies and governments, many users still can't decide whether AI will be used for good or for evil.

"The media is constantly flooded with two sides of this story: one being that AI has arrived and will change the world (primarily from the technology suppliers); and the other being all that has gone wrong with AI," Sokalski said.

To resolve that paradox, AI vendors want more guidance on how to govern AI technologies and manage the implications when a decision goes wrong, he added.


"Many believe that regulation can provide a broad construct to which organizations implementing these technologies can be proactive in how to govern, manage and instill trust in their technology without requiring consumers and business users of the technologies to be data scientists or experts in how it works," he said.

Additionally, said Traci Gusher, principal for data and analytics at KPMG, organizations expect regulation at some point and want to know what those regulations may be to better prepare for them.

"[Organizations are] saying we want to know if you're going to regulate this and how you're going to regulate this," she said.

Over the next year or so, most nations, including the U.S., will likely continue to be cautious about over-regulating AI, Sokalski predicted. Binding global regulation is unlikely within the next decade, but "standardized global AI governance frameworks, with common AI principles, can be expected to materialize within two to three years in the relatively AI-mature economies."

"It's a balance of creating significant, scalable and trusted AI capabilities to emerge as a global player in this space versus stifling innovation through over-regulation," he said.
