AI regulation talks heat up among U.S. policymakers
As Congress grows more concerned about the risks of AI, regulation is becoming a hot topic among policymakers.
The U.S. could see new AI regulation proposed by the end of the year as policymakers placed artificial intelligence in the hot seat this week.
During congressional hearings and meetings, policymakers focused on AI regulation and approaches such as establishing a licensing regime for AI models, creating an AI systems oversight body and even taking a risk-based approach similar to the EU AI Act, which places AI systems into different risk categories. Heightened federal discussion around AI regulation signals that Congress is worried about AI, said Darrell West, a senior fellow in the Center for Technology Innovation at Brookings.
"They can see it being used in a lot of business sectors. They're also worried about the possible election uses and how AI could be used for bad purposes," he said. "It's a bipartisan concern right now."
Sen. Richard Blumenthal (D-Conn.) on Tuesday, along with other members of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, grilled Brad Smith, president of Microsoft, and William Dally, chief scientist and senior vice president of research at Nvidia, about how AI systems affect society and how policymakers should regulate them. Sen. Chuck Schumer (D-N.Y.) held a closed-door meeting Wednesday with big names in the tech industry aimed at informing the U.S. approach to AI regulation.
Blumenthal said Congress should provide enforceable safeguards for AI systems to encourage trust and confidence in the evolving technology -- regulation he said will come by the end of 2023.
"Make no mistake, there will be regulation," Blumenthal said. "It should be regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity."
Policymakers consider approaches to AI regulation
Blumenthal and Sen. Josh Hawley (R-Mo.) recently introduced a bipartisan framework for AI regulation, which includes establishing a licensing regime for AI systems run by an independent oversight body. The licensing requirements would include pre-deployment testing, adverse incident reporting programs and registration of information about the AI model.
The independent oversight body would have the authority to audit AI companies seeking licenses and act as an enforcement authority when AI models breach data privacy or violate civil rights.
While Microsoft's Smith voiced support for a licensing regime, he opposed having a single agency oversee it. Instead, Smith said each federal enforcement agency should learn how to assess AI systems, given the breadth of industries where AI can be applied, from medicine and pharmaceuticals to defense and IT.
"Especially for the frontier models, the most advanced, as well as certain applications that are highest risk, frankly, you do need a license from the government before you go forward. That is real accountability," Smith said. "I do think that it would be a mistake to think that one single agency or one single licensing regime would be the right recipe to address everything, especially when we think about the harms that we need to address."
In this approach, policymakers must determine where to draw the line between models requiring deployment licenses and those that can be deployed freely without oversight. Nvidia's Dally emphasized that this should involve weighing the risks presented by the model while still fostering continued innovation.
An AI model used in determining a medical procedure, for example, poses a high risk since it's tied to patient outcomes, Dally said.
"If you have another model, which is controlling the temperature in your building, if it gets it a little bit wrong and you consume a little too much power, it's not a life-threatening situation," he said. "You need to regulate the things that have high consequences if the model goes awry."
Brookings' West said policymakers are on target with distinguishing low-risk from high-risk AI applications.
"Focusing on the high-risk applications is definitely the way to go," he said.
Other hurdles could slow AI regulation proposals
While Blumenthal aims to see AI regulation proposed by the end of the year, that timeline could be difficult to meet given competing priorities, West said. Not only is Congress facing a potential government shutdown this month, but House Republicans have also opened an impeachment inquiry into President Joe Biden.
"There are a lot of distractions now that could derail any type of legislation," West said. But in one to two years, he added, "the prospects are pretty good for new federal legislation on AI. That topic is getting a tremendous amount of attention, there are concerns at many different levels, and both Republicans and Democrats are worried about this."
Indeed, even proposals such as establishing a licensing regime -- and especially creating a new, single oversight body -- will face challenges in Congress, said Todd Wooten, an attorney at Holland & Knight who represented the nonprofit Center for Humane Technology during Schumer's closed-door meeting Wednesday. Still, Wooten said, if Congress can at minimum establish some sort of liability mechanism, it would be a good start toward addressing some of AI's risks.
"Just doing that would lead to a lot of good discussions within these companies, which aren't currently happening because liability isn't a priority because it's not there," he said.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.