Examining both sides of the AI regulation debate
Some government and business leaders want more AI regulation to ensure that systems don't discriminate against or harm people. Others say too much regulation will stifle innovation.
As organizations begin moving AI technologies out of testing and into deployment, policymakers and businesses have started to understand just how much AI is changing the world. That realization has set off AI regulation debates within government and business circles.
Already, AI is dramatically boosting productivity, helping connect people in new ways and improving healthcare. However, when misused or applied carelessly, AI can cut jobs, produce biased or racist results, and even kill.
AI: Beneficial to humans
Like any powerful force, AI, and deep learning models in particular, requires rules governing its development and use to prevent unnecessary harm, according to many in the scientific community. Just how much regulation is needed, especially from governments, remains a matter of debate.
Most AI experts and policymakers agree that a simple framework of regulatory policies is needed soon, as computing power increases steadily, AI and data science startups pop up almost daily, and the amount of data organizations collect on people grows exponentially.
"We're dealing with something that has great possibilities, as well as serious [implications]," said Michael Dukakis, former governor of Massachusetts, during a panel discussion at the 2019 AI World Government conference in Washington, D.C.
The benefits of AI regulation
Many national governments have already put guidelines in place, some of them vague, about how data can and can't be collected and used. Governments often work with major businesses when debating AI regulation and how it should be enforced.
Some regulatory rules also govern how explainable AI must be. Currently, many machine learning and deep learning algorithms run in a black box; their inner workings are considered proprietary technology and sealed off from the public. As a result, if businesses don't fully understand how a deep learning model reaches a decision, they could overlook a biased output.
The U.S. recently updated its guidelines on data and AI, and Europe marked the first anniversary of its General Data Protection Regulation (GDPR).
Many private organizations have set internal guidelines and regulations for AI and have made those rules public, hoping that other companies will adopt or adapt them. The sheer number of guidelines that private groups have established indicates just how varied the viewpoints on private and government regulation of AI are.
"Government has to be involved," Dukakis said, taking a clear stance in the AI regulation debate.
"The United States has to play a major, constructive role in bringing the international community together," he said. He said that countries worldwide must come together for meaningful debates and discussions, eventually leading to potential international government regulation of AI.
AI regulation could hurt businesses
Bob Gourley, CTO and co-founder of consulting firm OODA, agreed that governments should be involved but said their power and scope should be limited.
"Let's move faster with the technology. Let's be ready for job displacement. It's a real concern, but not an instantaneous concern," Gourley said during the panel discussion.
While the COVID-19 pandemic has shown the world that businesses can automate some jobs, such as customer service, fairly quickly, many experts agree that most human jobs aren't going away anytime soon.
Regulations, Gourley argued, would slow technological growth, although he noted that AI should not be deployed without adequate testing and adherence to a security framework.
During other panel discussions at the conference, several speakers argued that governments should take their lead from the private sector.
Organizations should focus on creating transparent and explainable AI models before governments concentrate on regulation, said Michael Nelson, a former professor at Georgetown University.
The lack of explainable, transparent AI has long been a problem, with consumers and organizations arguing that AI providers need to do more to make the inner workings of their algorithms easier to understand.
Nelson also argued that too much government regulation of AI could stifle competition, which, he said, is a core driver of innovation.
Lord Tim Clement-Jones, former chair of the United Kingdom's House of Lords Select Committee on Artificial Intelligence, agreed that regulation should be kept to a minimum but said it can be a positive force.
Governments, he said, should start working now on AI guidelines and regulations.
Guidelines like the GDPR have been effective, he said, and have laid the foundation for more focused government regulation of AI.