
California AI bill sets guardrails that draw criticism

The debate around California's AI bill, SB 1047, centers on its potential to harm startups and stifle innovation. However, some applaud the bill's whistleblower protections.

California's AI bill has drawn significant scrutiny. While some believe any guardrails are better than none when it comes to the rapidly evolving technology, others say the bill could negatively affect small businesses and hurt innovation.

Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require developers of advanced AI models that cost at least $100 million to train to test those models for their ability to cause harm and to implement guardrails that mitigate that risk. The bill also creates whistleblower protections for employees of large AI companies and establishes CalCompute, a public cloud computing cluster meant to support responsible AI development by startups and researchers. The bill becomes law if California Governor Gavin Newsom signs it by Sept. 30.

The California AI bill has garnered support from top AI experts, including AI pioneer and former Google researcher Geoffrey Hinton and Canadian computer scientist Yoshua Bengio. However, the bill also faces criticism. Rep. Nancy Pelosi (D-Calif.) and other members of the U.S. House of Representatives sent a letter to Newsom opposing the bill, stating that "this bill would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development." ChatGPT developer OpenAI has also opposed the bill.


Gartner analyst Avivah Litan said she sees both sides of the debate over California's AI bill. Though Litan believes the bill's thresholds for AI safety testing could harm small businesses, she said AI needs regulation.

"I do think it's really important to regulate this technology," she said. "Maybe some regulation, even if it's bad, is better than no regulation."

California AI bill has pros and cons

Steve Carlin, CEO of AiFi, a Burlingame, Calif., company that makes AI-enabled optimization tools for retailers, described the AI bill as dense and said it tries to address too many concerns.

Not only does the California AI bill include vague, difficult-to-interpret terms such as "reasonable assurance" and "reasonable margin for safety," Carlin said, but it also requires AI developers to create testing criteria based on industry best practices without providing additional guidance.

"This is a significant challenge in a globally connected industry still establishing its standards," he said. "Furthermore, the bill does not clarify who will enforce these regulations or what qualifications they need to understand the technology they are reviewing."

Instead of attempting to regulate AI models, a better AI bill would follow in the footsteps of the EU AI Act and focus on risks and applications, according to Litan. The EU AI Act has also received both positive and negative feedback.

"It never made sense to me to regulate technology," she said. "How can you regulate math? You should regulate the application. I feel like that would be a much smarter approach."

Litan argued that the California AI bill could negatively affect startups, which she said could raise $100 million in funding and thereby fall under the bill's compliance requirements. Regulatory compliance costs add up quickly for small businesses, she said.

Any thresholds set for AI models today will also likely be outdated within the next couple of years, given how the technology is developing, Litan added.

"On the one hand, I think it's really going to stifle innovation and there's a much smarter way to do it," she said. "On the other hand, you see these brilliant people like Yoshua Bengio and Geoffrey Hinton supporting the bill. When they do that, you think, 'Well, they know something I don't.'"

Forrester Research analyst Alla Valente applauded the bill's whistleblower protections, which shield employees who come forward with information about issues discovered in companies' AI models. She called the measure critical for improving AI safety.

Valente said she also struggles to see why some of the bill's provisions, including safety testing requirements for AI models, are creating a divide within the industry. She argued that AI has become pervasive not just in the technology industry, but in critical infrastructure, government and business, and it needs safety testing.

Valente acknowledged that implementing security measures and AI model testing takes resources, which might be a concern for small businesses. While large companies could bake safety and security processes into their operations, startups might lack funds for extensive measures. However, she said there's "always a cost to doing something right."

Even if small businesses save on costs upfront by forgoing testing, they will still face those costs down the road -- and could even face lawsuits from customers who suffer harm as a result of a company's AI model, Valente said.

"If you're building a skyscraper, wouldn't you test the safety and stability of the foundation?" Valente said. "I don't know why they're pushing this buck down the road, but it's going to come due."

Bill adds to patchwork of AI laws in U.S.

California isn't the first state to advance an AI bill. Earlier this year, Colorado passed comprehensive AI legislation, while Connecticut lawmakers advanced an AI bill to regulate private sector deployment of AI models. Even cities like New York have passed AI bills targeting algorithmic bias.

AiFi's Carlin described navigating an increasingly complex network of state AI regulations as a "daunting and costly task for startups."

"A state-by-state approach, as seen with SB 1047 and other bills in California, could lead to a fragmented regulatory landscape, creating a nightmare for AI developers," he said. "California is attempting to address an issue that should be handled by the federal government."

Valente said multiple AI bills have been proposed at the federal level without success, meaning it's unlikely that Congress will vote on a federal AI standard anytime soon.

"If there was a federal alternative to this state-by-state, piecemeal approach, I certainly would welcome it, and I think a lot of organizations would welcome it as well," Valente said. "Unfortunately, in the U.S., waiting for Congress to act on something not only takes a really long time, but the consequences you're trying to avoid have already happened."

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
