Google, OpenAI target state laws in AI action plan
The federal government is developing plans for U.S. AI policy. Stakeholders want a federal policy that preempts state laws to be a top strategic priority.
While President Donald Trump's administration has been focused on moving away from regulation, leading AI vendors such as Google and OpenAI want the government's pending AI action plan to include federal policy preempting the growing patchwork of state AI laws in the U.S.
The White House Office of Science and Technology Policy (OSTP) requested input from stakeholders on developing an AI action plan. It recently closed the public comment period, receiving more than 8,700 submissions. OSTP asked interested parties to outline priority actions to support U.S. dominance of AI technology without excessive regulation that would hamper private sector innovation in AI. For some big tech companies, tackling state AI laws should be one of the U.S. government's top priorities.
The U.S. should adopt policy frameworks that "preempt a chaotic patchwork of state-level rules on frontier AI development," according to Google's submission.
Meanwhile, OpenAI called for the freedom to innovate in the U.S. national interest and urged the government to keep competitors such as China from benefiting from "American AI companies having to comply with overly burdensome state laws." A handful of U.S. states have passed comprehensive AI regulation, including Colorado, California and Utah.
Without a federal AI law, states implement individual AI requirements that create compliance challenges for businesses, Forrester Research analyst Alla Valente said. If the U.S. adopts an overarching federal AI policy, it could remove that burden.
"By leaving this up to the states, you can have 50 sets of AI regulations that all look vastly different," she said.
However, an executive order cannot preempt state AI regulations. It's up to Congress to pass a federal AI law -- something it's struggled to do.
AI action plan submissions include state, global focus
The absence of a unified U.S. approach to AI governance is "ineffective and duplicative," said Hodan Omaar, a senior policy manager at the Center for Data Innovation, a technology policy think tank.
"It creates inconsistencies and incoherence in a U.S. approach," she said.
Beyond focusing on state laws, Valente said, Google's stance indicates that the company wants the U.S. to consider the global development of AI laws as well, such as the European Union's AI Act.
Any standard, policy or framework the U.S. creates should reflect American interests, but cannot ignore different countries' AI policies, she said. Google said that when working with aligned countries, the U.S. should "develop protocols and benchmarks around potential risks of frontier AI systems."
"To ignore what the rest of the world is doing around AI frameworks, AI governance [and] AI risk creates an even larger gap between U.S. innovation and the rest of the world ... How do you then remain competitive if other countries have requirements that can't be satisfied by U.S. AI innovation?" Valente said.
OpenAI also addressed export controls in its comments, asking for a shift in strategy toward promoting global adoption of U.S. AI systems while using export controls more strategically to maintain the U.S. AI lead. The company called for updating the AI diffusion rule that advanced U.S. export controls -- a rule proposed by the administration of former President Joe Biden that was met with heavy industry backlash.
Meanwhile, in the Center for Data Innovation's comments, the think tank called for the U.S. AI action plan to reorient its export control strategy. While export controls are meant to weaken competitors, notably China's AI sector, they're "increasingly disadvantaging U.S. firms instead," the comments said. The rise of DeepSeek points to China's ability to innovate despite U.S. export controls on advanced AI chips.
Omaar outlined in the think tank's submission that the U.S. should establish a National Data Foundation dedicated to funding and facilitating the sharing of high-quality data sets for AI model development. She said the U.S. should also preserve, but refocus, the NIST AI Safety Institute to provide foundational standards for AI governance.
"The federal government has an important role to play in ensuring there are standards," Omaar said. "Ensuring NIST is able to do the important AI work they were doing is important to ensure smooth AI adoption."
What the final AI action plan could look like
OSTP's request for information on an AI action plan asked stakeholders for their thoughts on U.S. AI policy actions. Because OSTP offered no recommendations or draft framework for stakeholders to react to, Valente said, it's unclear what the AI action plan will eventually include.
"What this plan ends up looking like, one can only imagine," she said.
Darrell West, a senior fellow at the Brookings Institution, said the White House's request for information indicates that the Trump administration will focus on dropping burdensome requirements and trusting private companies to innovate with less federal oversight.
"There will be fewer constraints on tech companies," he said. "They will be free to innovate in whatever direction they would like."
The federal government can balance AI safety and innovation, which will hopefully be reflected in the AI action plan, said Jason Corso, co-founder of AI startup Voxel51 and a computer science professor at the University of Michigan.
The general population is already skeptical of AI, and growing pains as the technology develops risk further undermining trust in it, he said. That's why policy frameworks should be created with AI safety in mind, Corso added.
A federal framework that lacks AI safety considerations would leave responsibility for AI safety decisions to companies' CIOs or chief AI officers, which Corso said presents a "big risk." The effect could be less AI adoption or slower ROI, he said.
"This contemporary AI is so nascent that despite the rapid advances we're seeing, there's actually rather little understood about its predictability, repeatability or even its robustness with certain types of questions or reasoning scenarios," he said. "We certainly do need innovation, but we also need safety."
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.