
Lawyers win as enterprises race to generative AI without adequate laws

As Congress drags its feet on AI regulation, lawyers are filling the vacuum by helping enterprises navigate court rulings and regulatory decisions based on outdated laws.



Enterprises preparing to use generative AI will find a significant share of initial costs going to lawyers.

Courts and regulators can't keep up with the speed at which tech companies are building generative AI into products while promising efficiencies and productivity gains not seen since the introduction of the personal computer and the internet.

"This is as significant as PCs in the '80s, the web in the '90s, mobile in the 2000s and the cloud in the 2010s," Microsoft CEO Satya Nadella said of AI at last month's launch of the company's AI assistant Copilot.

What vendors seldom discuss alongside AI's benefits are its legal risks. Indemnity clauses for generative AI output are weak or nonexistent, and there are no clear guidelines for copyrighting content created with generative AI. Companies must also factor in the uncertainty of pending lawsuits against AI services.

"[AI companies] have created additional legal risks for their customers in terms of getting sued," said Brad Frazer, a partner at law firm Hawley Troxell in Boise, Idaho. "Who's going to profit? The lawyers."

Congress' delay in approving new AI regulations has left enterprises to follow guidelines set by courts and regulators who are making decisions based on outdated laws.

"It's truly the Wild West," said Frazer, a frequent speaker on generative AI. "It's the internet in 1998."

AI indemnification's weaknesses

Adobe, IBM and Microsoft are examples of companies trying to comfort jittery customers with indemnity clauses that would protect them from lawsuits stemming from the output of their generative AI systems.

In June, Adobe promised to pay customers' legal expenses for copyright claims arising from its AI art-creation tool, Firefly. Microsoft's Copilot Copyright Commitment took effect this week, and IBM released a collection of generative AI models last week, saying it would indemnify companies against suits arising from the models' output.

But indemnities offer customers limited protection and pose little risk to providers, legal experts said. For example, Microsoft imposes a list of requirements customers must meet for Copilot's indemnity to apply, including using the company's content filters and other mechanisms that control output.

Such restrictions can make the product "almost ineffectual" for some companies while increasing the chance of inadvertently voiding the indemnification, Frazer said.

AI vendors themselves also face little risk, because the chances of copyright infringement arising from the use of their services are minimal, according to Chanley Howell, a partner at law firm Foley & Lardner, based in Milwaukee. Most generative AI systems assemble output from bits and pieces of many sources, so the result is unlikely to be close enough to any single work to infringe its copyright, he said.

"There are certainly risks, but I put copyright way down on the list," Howell said.

Nevertheless, AI-generated images and text have already sparked lawsuits. Visual artists have sued image generators DeviantArt, Midjourney and Stability AI, claiming the companies used their copyright-protected art. Getty Images sued Stability AI, accusing it of training its model on Getty-licensed pictures without permission. Writers have sued OpenAI, saying it trained its models on their copyrighted works. How the eventual decisions will affect companies using these services is unknown.


Protecting enterprise data

Typically, indemnity clauses are nonnegotiable, but enterprises can negotiate how the AI provider handles sensitive data. The top concern among Foley clients is protecting the information on customers, operations and strategies that is inserted into prompts to generate output.

"The risk is more how the AI vendor can use our clients' data," Howell said. "It's not so much the risk of using the output from the AI solution."

For example, setting up a customer-facing chatbot requires handing internal corporate data to the AI provider. Preventing the vendor from using that information to train the large language models that power its AI requires an agreement that sets conditions on data use.

"What we're seeing is, there's a way to negotiate through those issues," Howell said.

Copyrighting output from generative AI

The U.S. Copyright Office has ruled that output generated solely by LLMs can't be copyrighted because the models are software, not human authors. What still needs clarification is how much creative content companies must add to that output to produce something eligible for registration.

"That is the real-life use case of the technology," said Morvareed Salehpour, a business and technology lawyer at Los Angeles-based Salehpour Legal. "Very few users are going to take the content, cut and paste it, and use it as is."

A U.S. Supreme Court case that legal experts hoped would clarify the issue involved artist Andy Warhol's use of photographer Lynn Goldsmith's photo of the rock star Prince. In May, the court ruled that Warhol's use of the photo was not transformative enough to qualify as fair use. However, the court did not define the extent of change necessary to create a new work.

Therefore, more precise guidelines will have to wait for additional lawsuits and decisions from regulators. "I would love to see some action from Congress, but I think it's going to be hard to get that because it's hard to get bipartisan support on anything," Salehpour said.

Dealing with generative AI's unknowns

There are steps companies can take to reduce the legal risks posed by generative AI, legal experts said. These include the following:

  • Establish guidelines that employees must follow when using generative AI to avoid violating data privacy or confidentiality agreements.
  • Assign people responsible for vetting all AI-generated outputs for accuracy and bias before using them in business operations.
  • Closely examine indemnities to determine how to operate under their limitations.
  • Avoid LLMs trained on public data by building or fine-tuning in-house models solely on private data, so output is based only on corporate information (a minimal sketch of the in-house pattern follows this list).
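
Fully training an in-house model is one route to that last item; a lighter-weight variant of the same idea is retrieval over a private document store, with generation handled by an internally hosted model. Below is a minimal sketch, assuming a hypothetical `generate` callable wrapping the in-house model; the keyword retrieval is deliberately simplistic and stands in for a proper vector index.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str

def retrieve(query: str, docs: list[Doc], k: int = 3) -> list[Doc]:
    """Rank documents by naive keyword overlap with the query; a
    production system would use a vector index, but the corpus
    stays in-house either way."""
    terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )[:k]

def answer(query: str, docs: list[Doc], generate) -> str:
    # `generate` is a hypothetical callable for a model hosted
    # in-house, so no corporate text reaches an outside vendor.
    context = "\n\n".join(d.text for d in retrieve(query, docs))
    return generate(
        f"Answer using only this internal material:\n{context}\n\nQ: {query}"
    )
```

Because both the documents and the model sit inside the company, questions about vendor data use and training rights largely fall away; the legal review shifts to how the private corpus itself was assembled.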

At the risk of sounding self-serving, Howell advised enterprises to tap in-house or outside lawyers before using generative AI.

"You've got to get legal involved from day one," Howell said. "If you don't, you're going to have problems."

Antone Gonsalves is networking news director for TechTarget Editorial. He has deep and wide experience in tech journalism. Since the mid-1990s, he has worked for UBM's InformationWeek, TechWeb and Computer Reseller News. He has also written for Ziff Davis' PC Week, IDG's CSOonline and IBTMedia's CruxialCIO, and rounded all of that out by covering startups for Bloomberg News. He started his journalism career at United Press International, working as a reporter and editor in California, Texas, Kansas and Florida.
