California bill veto might push Congress to act on AI
California Governor Gavin Newsom took issue with SB 1047's broad regulation of AI systems without addressing specific concerns such as deployment in high-risk environments.
California Governor Gavin Newsom's decision to veto Senate Bill 1047, a controversial AI bill that would have implemented safety requirements for large AI systems, creates a void in AI regulation that could pressure Congress to act.
While the AI bill garnered support from AI experts such as Geoffrey Hinton, the AI pioneer who formerly worked at Google, and Canadian computer scientist Yoshua Bengio, it also drew significant criticism that its requirements would burden startups and stifle innovation. U.S. Reps. Zoe Lofgren (D-Calif.) and Nancy Pelosi (D-Calif.) applauded Newsom's decision to veto the bill. Lofgren said in a release that she believes this is "an issue that should be handled at the federal level."
"Congress and the administration are both moving on AI governance," she said.
Daniel Castro, vice president of the Information Technology and Innovation Foundation, said in a statement that other California AI bills Newsom signed earlier this month specifically targeting deepfakes and digital likenesses offer a more effective approach than SB 1047's broad regulation of the underlying technology. In his statement vetoing the California AI bill, Newsom noted he has signed 17 bills in the last 30 days regulating generative AI.
However, Castro argued that states should not be setting the bar for regulating AI, calling on Congress to act. Lawmakers have advanced some AI bills this year, but none have passed a floor vote in either the House or Senate. Congress is unlikely to take action before President Joe Biden's term ends this year, and presidential candidates Kamala Harris and Donald Trump would likely take different approaches to regulating AI.
"One of the most significant threats to U.S. leadership in AI is the potential emergence of a patchwork of conflicting state laws," Castro said. "Such a scenario would entangle U.S. AI firms in complex and costly regulations, impeding American innovation while global competitors, particularly China, advance unencumbered."
Newsom's AI bill veto receives praise, criticism
Newsom said in a statement that while the California AI bill was well-intentioned, it didn't account for specific factors such as whether an AI system was deployed in high-risk environments, involved critical decision-making or used sensitive data.
"The bill applies stringent standards to even the most basic functions so long as a large system deploys it," Newsom said. "I do not believe this is the best approach to protecting the public from real threats posed by the technology."
Castro applauded Newsom's decision to veto the bill. He said California "avoided a potentially devastating blow to its tech sector."
"The bill's proposed safety measures lacked an evidence-based foundation and appeared disconnected from the rapidly evolving AI safety discussions in industry and academia," Castro said in a statement. "Rushing to regulate such a complex and dynamic field would have been a grave error."
While the bill featured strong elements, it needed additional safeguards for small and medium-sized AI companies, said Arun Subramaniyan, founder and CEO of Silicon Valley-based generative AI company Articul8 AI, in a statement.
The California AI bill's compute thresholds initially applied only to large companies. But Subramaniyan said he feared that by the time the bill took effect in 2026, those thresholds could also sweep in smaller firms.
Still, some view Newsom's veto of the California AI bill as a setback. Nicole Gill, co-founder and executive director of Accountable Tech, an organization dedicated to reining in large tech firms, described the veto as a "massive giveaway to big tech companies."
"This veto will not 'empower innovation,'" she said in a statement. "It only further entrenches the status quo where big tech monopolies are allowed to rake in profits without regard for our safety, even as their AI tools are already threatening democracy, civil rights, and the environment with unknown potential for other catastrophic harms."
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.