US works to develop AI standards while California legislates
California has proposed AI regulation as the U.S. works to develop AI standards by bringing together companies such as Apple, Amazon, Google and Microsoft.
While the Biden administration is working to develop AI safety standards, California may move faster and make AI safety the law.
California Sen. Scott Wiener (D-San Francisco) on Thursday introduced Senate Bill 1047, AI legislation that would establish safety standards for developers of large AI systems. The standards would not apply to startups developing "less potent models," according to a statement. By "less potent models," Wiener is likely referring to AI systems known as narrow or weak AI, which are designed for specific tasks, such as predictive modeling. The bill would also establish a public cloud offering called CalCompute to enable startups, researchers and others to participate in developing large AI systems.
The proposed legislation follows on the heels of the Biden administration's creation of the U.S. AI Safety Institute Consortium (AISIC), which will help meet the goals President Joe Biden set out in his executive order on AI last year. The consortium will help develop guidelines for red-team safety testing, risk management, safety and security, and watermarking of AI-generated content. It brings together more than 200 leading AI stakeholders, including Amazon, Apple, Google, Adobe, Accenture, Intel, Meta, Microsoft, Nvidia, OpenAI, Salesforce and Workday, to weigh in on the development of AI safety standards.
Wiener recognized that developers have pioneered safe development practices for AI systems. However, the release announcing the proposed AI regulation said that California's government "cannot afford to be complacent" about regulating AI.
"With Congress paralyzed and the future of the Biden administration's executive order in doubt, California has an indispensable role to play in ensuring that we develop this extremely powerful technology with basic safety guardrails," according to the statement.
California wants to build on US standards with AI regulation
The proposed AI legislation in California aims to advance AISIC's efforts in AI safety by codifying its best practices into law, according to the statement.
Wiener said that the proposed legislation offers a chance to use insights gained from past failures to regulate other technologies.
"We've seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding or mitigating the risks," he said in the statement. "SB 1047 does just that, by developing responsible, appropriate guardrails around development of the biggest, most high-impact AI systems to ensure they are used to improve Californians' lives, without compromising safety or security."
The AI safety standards set out in SB 1047 include pre-deployment safety testing and implementation of cybersecurity protections for a handful of "extremely large AI developers," according to the release. The bill creates "no new obligations for startups or business customers of AI products."
Congress has yet to advance any AI legislation, and policymakers are unsure what AI legislation might be adopted in 2024. The Biden administration has taken steps through executive powers to advance AI safety standards, including creating the consortium and relying on partners from industry, civil society and academia to help develop those guidelines.
"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Gina Raimondo, U.S. secretary of commerce, said in a statement. "By working with this group of leaders from industry, civil society and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America's competitive edge and develop AI responsibly."
California's advancement of AI regulation could create confusion for businesses if they're eventually forced to navigate multiple sets of AI safety standards, especially if individual states' rules conflict with broader U.S. standards, said Hodan Omaar, senior policy analyst at the Information Technology and Innovation Foundation's Center for Data Innovation.
"Creating California-specific safety standards for large-scale AI systems is problematic because it sets the stage for a patchwork of safety rules, with different states coming up with their own diverse sets of standards that providers of these systems will have to navigate," she said. "A national set of standards that preempts states from creating their own standards is a much better approach."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.