
The accelerating use of generative AI may prompt U.S. action

Generative AI tools like ChatGPT do everything from writing code to detecting network vulnerabilities. But the tools also carry risks, which might spur government involvement.

The growing use of generative artificial intelligence tools like ChatGPT might spur the federal government to provide businesses with guidance on -- or even regulation of -- the technology.

Generative AI can create new content such as videos, audio, images, text and code. Its recent rise to prominence is thanks to ChatGPT, which OpenAI launched in November 2022. ChatGPT is built on top of OpenAI's GPT-3 family of large language models and uses prompts to write essays, answer questions, create software code and even look for security vulnerabilities within networks.

Businesses are beginning to explore how they can use the technology in their operations. Google and Microsoft have expanded their investments in generative AI, while news outlets like BuzzFeed are testing generative AI to put together content. However, despite its capabilities, generative AI has raised concerns about copyright issues and bad actors using the tools to spread misinformation or generate fake comments and reviews.

As the technology becomes more widely used, consumers, businesses and the government "need to ensure the tools are being used responsibly," said Beena Ammanath, executive director of the Global Deloitte AI Institute.

"I can see the current hype around generative AI being a catalyst for further guidance and possibly regulation by government entities," Ammanath said.

Federal regulation of generative AI

Ammanath said there's "no question" that AI regulations are coming. Efforts are underway to understand better how AI works, its impact on consumers, and how to hold AI developers and users accountable for fair and transparent systems.

The U.S. recently released a Blueprint for an AI Bill of Rights, which offers businesses guidance on the ethical use of AI tools. Meanwhile, the European Union and the United Kingdom are considering binding AI regulations -- something the U.S. government has yet to pursue.

Discussions of restricting generative AI tools specifically are happening in other arenas. School districts such as New York City Public Schools, for instance, are blocking certain generative AI tools on their devices, Ammanath said.

"Lawmakers have proposed regulations on the use of facial recognition and other applications of AI, so it's likely we will also see strategies and regulations emerge around the use of generative AI tools," she said.

Alan Pelz-Sharpe, founder of market analysis firm Deep Analysis, said he believes there's a need for government guidance on the use of AI, particularly generative AI. Indeed, generative AI tools are already facing copyright lawsuits from artists and stock image company Getty Images over the alleged unlawful use of images to generate content.

"The government would do well to guide U.S. businesses toward a safe route in this regard, with guidance on how to ensure defensible use of generated content that doesn't impinge on existing copyright protections and is defensible on any advice such content might offer," he said.


Another concern with generative AI is its ability to create false information. Gartner analyst Avivah Litan said the federal government has yet to figure out how to address the spread of misinformation on social media sites like Facebook -- generative AI will only increase that problem "to the thousandth power."

Litan said it will also cost businesses to implement software that distinguishes what's fake from what's real -- something that's not always possible. OpenAI built a free tool to detect whether a text was written by AI or a human, which even OpenAI admits is "not fully reliable."

The problem is technology innovation outpaces federal regulation, Ammanath said. Though government-led efforts will be necessary to ensure fair, unbiased and trustworthy AI systems, businesses developing and using AI tools are "best positioned to regulate themselves" in the meantime.

"Regulations move at the speed of bureaucracy, and AI innovation is only accelerating, putting responsibility for self-regulation squarely in the hands of business leaders," she said.

How businesses should approach generative AI

Generative AI tools can substantially affect business operations, from marketing and software to design, entertainment and interpersonal communications. Along with producing new content, summaries, videos and translations, the tools also carry risks, such as generating false or inaccurate content.

Indeed, Forrester Research analyst Rowan Curran said businesses should look at generative AI as a tool to augment rather than replace existing processes and operations.

When considering generative AI tools, Curran recommended businesses examine where and on what data sets a model was trained, as well as whether they can understand how the model itself functions. The tools should be used with the same "governance, controls, testing and validation" that any other enterprise application or machine learning model requires, Curran said.

There are several generative AI business use cases, he said. He cited CarMax as an example: the company uses OpenAI's large language models, accessed through Microsoft Azure, to summarize customer reviews for each car.

"Instead of, as a human, going on their website or any ecommerce website and having to look through dozens, hundreds or thousands of reviews, they're basically taking all of that feedback and putting it into a much more readable format for you as the individual consumer," Curran said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
