
Effort to pause AI development lands with thud in Washington

Rapidly advancing AI systems are dangerous, according to Tesla's Elon Musk and Apple co-founder Steve Wozniak. So far, policymakers have been slow to respond to their open letter.

Policymakers aren't rushing to respond to an open letter published earlier this week that asks for a pause on AI development and government action to advance AI rules.

SpaceX, Twitter and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak signed the letter, published by the Future of Life Institute, which asks for a six-month pause on training AI systems more powerful than the popular GPT-4 model. The other roughly 2,000 signers were mostly academics and policy professionals, with a sprinkling of startup CEOs.

The letter also called on policymakers to "dramatically accelerate" AI governance systems. Such AI development rules should include new regulatory authorities focused on AI, oversight of advanced AI systems and watermarking systems to distinguish real from AI-generated content, according to the letter. In addition, the letter said AI rules and regulations should include auditing and certification, liability for harms caused by AI, public funding for AI safety research, and resources for coping with the disruptions AI will cause.

The letter is meant to serve as an alarm bell for policymakers and regulators worldwide as AI capabilities rapidly accelerate, said Chris Meserole, director of the Brookings Institution's Artificial Intelligence and Emerging Technology Initiative. However, Meserole said many government agencies and officials are already paying attention to and acting on concerns about AI.

So far, policymakers have been mostly quiet in the aftermath of the letter. Sen. Gary Peters (D-Mich.), for instance, who led a hearing earlier this month on AI risks and opportunities, has yet to tweet or issue a statement responding to it, and lawmakers generally haven't acknowledged the letter.

The letter lacks evidence to support its claims about the "unprecedented existential risk" posed by advanced AI systems, argued the Center for Data Innovation, part of technology policy think tank the Information Technology and Innovation Foundation (ITIF), in a blog post. The letter warns of the development of "nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us."

But Hodan Omaar, a senior policy analyst at ITIF's Center for Data Innovation, isn't buying it.

"This letter and the hypothetical scenarios it paints are so far removed from the practical realities of large language models," Omaar said. "It only feeds into AI hype and fears that will distract policy conversations away from how to make AI systems safe to simply how to make them stop. That is neither helpful nor realistic."

The White House responded to a question about the letter during its daily press briefing Thursday, noting that a number of the issues raised in the letter are addressed in the Blueprint for an AI Bill of Rights that the administration released last year.

"There's a comprehensive process that is underway to ensure a cohesive federal government approach to AI-related risks and opportunities, including how to ensure that AI innovation and deployment proceeds with appropriate prudence and safety foremost in mind," White House Press Secretary Karine Jean-Pierre said.

Governments lay the groundwork for rules on AI development

A number of "vital new policy initiatives" for AI rules and regulations have already been launched over the last several years, including the European Union's AI Act and the Blueprint for an AI Bill of Rights in the U.S., Brookings' Meserole said.

The EU's AI Act classifies AI systems into different categories of risk. High-risk AI systems include resume-scanning tools that rank job applicants, and companies that make such systems would need to follow specific legal requirements for the technology. The AI Act is expected to pass sometime in 2023.

Meanwhile, the U.S. Blueprint for an AI Bill of Rights guides businesses on how to ethically implement AI systems, focusing on problems such as algorithmic harms caused by biases. Though the blueprint functions only as a guideline, it could eventually inform the U.S. AI regulation that experts have called for. States including New York, Maryland and Illinois have also passed laws regulating automated employment decision-making tools that use AI.

In addition, the U.S. released a report earlier this year outlining plans to build a National AI Research Resource to increase public access to data and other tools for AI research.

"The open letter seems to assume we need a fundamentally new governance regime for AI, but that couldn't be further from the truth," Meserole said. "What we really need is to upgrade the legislative and regulatory efforts already underway."

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
