Lack of agreement on AI rules in U.S., EU gives China a leg up
While the EU and China have proposed AI rules, the U.S. is continuing a hands-off regulatory approach, which could affect its role in helping set the rules of the AI road.
In the global race to determine how best to govern the use of AI, it's crucial that democratic governments align on rules to counter China's influence and gain greater say over who decides what AI systems can do, where they can be deployed and what kind of data they can collect.
That's according to a panel of experts participating in a Brookings Institution webinar on the geopolitics of generative AI, which uses large language models to generate text, video and images. While Europe advances the AI Act to classify AI systems into categories of risk, the U.S. has been hesitant to introduce federal regulatory measures for AI, relying instead on state governments to enact their own AI rules and on companies to take responsibility for the AI systems they create.
On Friday, the White House provided another example of how the Biden administration is relying on companies to take responsibility. Microsoft, Meta, Google, Amazon, OpenAI, Anthropic and Inflection have voluntarily committed to developing safe and transparent AI technology. The companies committed to testing their AI systems internally and externally before releasing the technology and to developing technical mechanisms like watermarking to identify AI-generated content.
But relying on approaches like trusting companies to create responsible AI systems won't be enough to allow the U.S. to come to the international table and negotiate on the best approach to AI governance, said Marietje Schaake, international policy director of the Stanford Cyber Policy Center, during the webinar.
"The task for the U.S. government is to make it much more clear what kind of model of regulation it believes in," she said. "Negotiating internationally or coming to any table, whether it's for dialogue or for anything else, without more clarity on what the model looks like that you want to put up for discussion is hard."
Collaborating on AI rules to counter China
China has proposed its own rules around AI use. However, it's currently facing the challenge of reconciling the Chinese Communist Party's desire to control information dissemination with new AI models that don't always allow for significant levels of control, said Samm Sacks, senior fellow at the Paul Tsai China Center at Yale Law School, during the webinar.
Meanwhile, she said there's a lack of global discussion on how democratic governments plan to exist side-by-side with an authoritarian government that uses technology like AI to stay in power and monitor civilians.
"There is so much discussion around export controls and investment restrictions and how do we collaborate with like-minded governments," she said. "But the core question, to me, is how do we co-exist with China in this space?"
Schaake said that's where the lack of a regulatory model for AI puts the U.S. at a disadvantage when it comes to negotiating co-existence with other AI governance models globally.
"The price the U.S. will pay for inaction in that sense, or for trusting the market or choosing a hands-off approach in international negotiations, will become clear," she said.
While some voice concerns that regulating emerging technologies like AI will harm innovation, that's not the case, argued Chris Meserole, director of the AI and emerging technology initiative at Brookings. Meserole said regulating AI and having accountable, transparent systems would be in the U.S.'s long-term strategic interest when it comes to connecting with allies around AI use.
"If we are able to put forward trustworthy AI and regulate and govern these technologies effectively, we will be better positioned to recruit allies and partners globally and have a stronger case when China or other authoritarian regimes start making their pitch for their governance model of AI," he said during the webinar.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.