
Responsible AI helps reduce ethical, legal risks

Generative AI is opening companies up to new risks. But experts at EmTech Digital argue that incorporating responsible AI into decision making could mitigate exposure.

Ethical issues surrounding the use of artificial intelligence are resurfacing with advancements such as ChatGPT, which should serve as a reminder for enterprises to focus on responsible AI use.

That's according to experts at MIT Technology Review's annual EmTech Digital event this week, which included a focus on the legal ramifications of AI and how to incorporate responsible AI into business operations. The Biden administration also turned its attention to the issue this week, taking action to promote responsible AI innovation.

As use of generative AI increases, it's particularly important for businesses to pay attention to current lawsuits and developing regulations, said Regina Sam Penti, a partner at global law firm Ropes & Gray. AI developers such as Stability AI are increasingly facing lawsuits over the data used in their AI models. Stability AI offers Stable Diffusion, a tool that creates images from text and bases those images on real artists' work, which has triggered several copyright lawsuits.

Involving the legal system could slow development and deployment as companies are forced to pause and assess the risk of using data from certain sources, she said.

While it's AI developers that face liability for these issues now, businesses should also approach AI system implementation with caution and pay close attention to contract negotiations with AI developers to mitigate risk.

"Nearly all the cases we've seen are targeted at the creators of these systems because they have to deal with the use of data and the training of these models," Penti said during a discussion on the legal ramifications of AI. "If you're out there and creating these systems, you are likely to face some liability, especially if you're using large amounts of data."

Incorporating responsible AI

Enterprises should, from the beginning, focus on responsible AI use cases that align with their core values, said Diya Wynn, senior practice manager of responsible AI at Amazon Web Services, during the conference. She said AWS defines responsible AI as an operating approach that considers people, processes and technology to reduce unintended consequences and improve the AI model's value.


The people in the AI operating model matter even more than the technology itself, according to Wynn. When bringing an AI system into a business environment, she said, it's crucial to provide training and education that increase awareness of where risk might exist and how to minimize it.

Wynn said some of the questions businesses need to ask when implementing AI systems are, "Who needs to be involved? How do we take into consideration skilling and upskilling? What do we do in terms of process? Do we have the right governance structure necessary to support our efforts?"

Most of the challenges organizations face with responsible AI stem from failing to do the work upfront to make informed decisions about how AI is used, what data the system has access to, and how it's trained and tested, Wynn said.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

