Governments worldwide attempting to regulate generative AI
Countries worldwide are responding to generative AI systems with their own rules and laws. For example, China proposed new regulations, and the U.S. requested public comments.
As generative AI tools become more popular, governments worldwide are putting their own stamp on the technology, using regulation to temper the enthusiasm of enterprises and vendors.
Most recently, China swiftly proposed new AI rules hours after tech giant Alibaba formally revealed its version of ChatGPT on April 11.
Alibaba's AI chatbot
Introduced during a demonstration at the 2023 Alibaba Cloud Summit, Tongyi Qianwen -- which reportedly means "truth from a thousand questions" -- can draft invitation letters, plan itineraries and advise shoppers. The Chinese cloud vendor plans to integrate the large language model into its corporate applications, including its workplace messaging app DingTalk and voice assistant, Tmall Genie.
Users can begin registration for Tongyi Qianwen on April 14.
Alibaba's release comes after competitors SenseTime and Baidu recently launched their own AI bots, SenseChat and Ernie Bot, respectively.
These launches come as regulators worldwide attempt to navigate the popularity of generative AI systems and large language models.
China's response
Hours after Tongyi Qianwen's release, the Cyberspace Administration of China (CAC) proposed new rules, including a requirement that AI providers submit their products for review before releasing them to the public. The CAC also noted that AI-generated content should reflect socialist values.
The proposed rules from the CAC follow the Chinese government's general approach when regulating websites, social media apps and services, said RPA2AI Research analyst and founder Kashyap Kompella. The CAC's proposed regulation aims to ensure providers of generative AI services minimize harm to users, do not violate copyrights, do not include inaccurate content, and do not generate sensitive content or content that criticizes China's regime, Kompella added.
"I don't think anyone was expecting anything different," he said. "The AI chatbot regulations are on par for the Chinese course."
AI regulation attempts worldwide
China is one of many countries taking steps toward regulating generative AI. On Tuesday, the U.S. Commerce Department revealed that it would spend the next 60 days gathering public comments on AI audits, risks and ways to make consumers feel more comfortable with AI systems.
Moreover, last month Italy banned ChatGPT, and the Italian Data Protection Authority (IDPA) ordered OpenAI to stop processing users' data while it investigates a suspected breach of Europe's privacy regulations. Regulators in France, Ireland and Germany are also investigating whether ChatGPT violated GDPR. On April 12, the IDPA gave OpenAI an April 30 deadline to comply with specific data guidelines before the AI vendor can once again operate in the country.
Despite attempts to regulate generative AI, many governments are playing catch-up.
"Regulation is struggling to keep pace with technology," Kompella said. Regulation is also not straightforward, with regulators needing to address training data, data protection and privacy, intellectual property rights, inappropriate use of the AI systems, and AI hallucinations, he added.
"Enterprises must also evaluate data security breaches while using such chatbots," Kompella said.
"There is still a wide set of unresolved questions around the if, what and how of regulating AI – and generative AI specifically," said Forrester Research analyst Rowan Curran. "The potential impact of regulation on how the technology is both developed and deployed will take some time to resolve."
However, countries looking to understand generative AI instead of outright banning specific systems can create a conversation around the appropriate use of generative AI models, said Sarah Kreps, director of the Cornell Tech Policy Institute at Cornell University.
"It turns towards this idea of explainable AI, which is, 'Let's understand the model,'" she said. "That is a fruitful direction compared to a ban."