AI rules take center stage amid growing ChatGPT concerns

As countries grapple with regulating artificial intelligence tools such as ChatGPT, businesses should prepare for likely future regulations.

Concerns about AI tools such as OpenAI's ChatGPT, a chatbot built on large language models, are turning into action at the national level as governments grapple with rules and policies for the technology.

Italy's data protection authority banned ChatGPT earlier this month over data privacy concerns, though it said it will lift the temporary ban if OpenAI complies with a list of demands. Now, the French government is assessing the tool, and the European Data Protection Board has created a task force focused on ChatGPT and AI privacy rules.

In the U.S., the White House wants more information about AI-related risks. On Tuesday, the Department of Commerce's National Telecommunications and Information Administration launched a request for comment on policies to ensure AI accountability.

The U.S. has yet to advance any AI rules or regulations. Instead, the White House released a Blueprint for an AI Bill of Rights last year to guide businesses on ethical AI implementation. The Department of Commerce's inquiry will inform the Biden administration's approach to AI risks.

"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms," said Alan Davidson, a U.S. assistant secretary of commerce, in a statement. "Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems."

Italy's ChatGPT ban is likely to heighten the data privacy concerns about AI that many governments already hold, Gartner analyst Nader Henein said. As businesses increasingly use tools such as ChatGPT, he said, it will be important for CIOs and other business leaders to stay alert for regulatory changes.

"They shouldn't jump in with both feet just so they can say, 'Now I have a generative AI chatbot on my platform to help me do this,'" Henein said. "Beware of shiny objects."

Italy's ban a warning

Italy's ban wasn't about ChatGPT's technology, but about OpenAI's lack of compliance with the European Union's General Data Protection Regulation (GDPR), Henein said. The rapid jump in ChatGPT adoption likely overwhelmed the company and left it in a compliance predicament.

Henein said he's more concerned with how governments outside Italy might regulate AI-based tools. Regulators will likely follow GDPR's approach and place responsibility on the businesses using tools such as ChatGPT, he said, which is why business leaders should keep future regulation of large language models front of mind.

Henein said fast-adopting businesses risk developing a dependency on these new technologies and could suddenly find themselves noncompliant when new regulations take effect.

"You can't really cherry-pick information out of those models -- that's not how they work," he said. "You can't roll back to a certain point and say, 'I'm going to remove that piece of information.'"

Regulators face tough road ahead for AI rules

Beyond data privacy, there are other concerns with generative AI technologies, said Arthur Herman, a senior fellow at the research organization Hudson Institute.

Those concerns include the sheer amount of data that large language models collect and use for training, which can include protected material such as copyrighted works, he said. The potential disruption the technology might cause to jobs, the economy and social structures is another issue.

"There's a great wave of fear that's arisen about AI," Herman said.

However, Herman offered regulators a word of caution. While holding businesses accountable for harms caused by such technologies is important, he said, building trust into the systems themselves matters more than adding new regulations.

Indeed, in response to the Department of Commerce's request for comment, the Center for Data Innovation said the growing chorus of alarm about AI systems threatens the U.S.'s "innovation-friendly approach to the digital economy."

"The best way to achieve better outcomes for consumers is not to bog down companies using algorithms with new regulations -- even the most extensive internal reviews will not be able to predict all potential pitfalls -- but rather to hold companies strictly accountable for monitoring their use of algorithms and mitigating potential harms," said Hodan Omaar, senior policy analyst at the Center for Data Innovation, in a statement.

OpenAI responded to numerous concerns about its technology, including data privacy, in a blog post published earlier this month. The company said it spent six months assessing GPT-4, its latest model, to better understand its risks and benefits, but noted that more time might be needed to improve AI systems' safety.

"[P]olicymakers and AI providers will need to ensure that AI development and deployment is governed effectively at a global scale, so no one cuts corners to get ahead," the OpenAI blog post said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
