TechTarget.com/searchenterpriseai

https://www.techtarget.com/searchenterpriseai/podcast/The-AI-market-does-not-understand-AI-safety

The AI market does not understand AI safety

By Esther Shittu

With the advancement of generative AI, a key goal for enterprises is making sure that the AI systems they use are safe and responsible.

Many times, though, when AI vendors discuss responsible AI, they combine it with safe AI, which some AI experts say is not the same. This confusion about the difference between responsible AI and safe AI can sometimes give enterprises a false sense that the AI systems they're deploying are safe when they are not.

According to Stuart Battersby, CTO of AI safety vendor Chatterbox Labs, responsible AI often refers to AI governance. When discussing responsible AI, vendors are looking at making sure that AI systems benefit users and do not cause harm that might lead to ethical or legal problems.

"It might include policies and principles about how you treat AI," Battersby said on the Targeting AI podcast from Informa TechTarget. "You've got some solutions for AI governance, which typically are workflow things. It may decide who in the organization has a sign-off in the AI project or whether we have the right permissions to go forward with this project, with this AI use case."

This is different from AI safety, which looks at whether the AI system produces harmful content, whether the controls and safety layers are adequate, or whether there is bias, Battersby continued. AI safety is assessing how the systems respond to inquiries, and sometimes involves the AI creator preventing the AI system from responding to certain inquiries.

He added that enterprises often feel that an AI model is good to use just because it has responsible AI built in. That is not always true.

For example, when Chatterbox tested the DeepSeek-R1 model, the model failed all safety checks. Similarly, some reasoning exercises with Google's Gemini Flash and OpenAI o1 also failed safety tests.

"It's no good having the fastest, most effective model if there's no way for it to be adopted into an organization because it's too risky," Battersby said.

Also during the podcast, Danny Coleman, CEO at Chatterbox Labs, said AI safety can be a critical bottleneck when considering the adoption of AI models.

For example, in heavily regulated industries, once AI projects are approved and have gone through governance, they face challenges because of a lack of safety testing, Coleman said.

"Unless these systems are proven to be safe, secure, robust and tested, how will we ever move more into production?" he said. "It's important that all stakeholders understand the role that they have to play in making sure AI systems are safe."

Esther Shittu is an Informa TechTarget news writer and podcast host covering AI software and systems. Shaun Sutner is senior news director for Informa TechTarget's information management team, driving coverage of AI, analytics and data management technologies, and big tech and federal regulation. Together, they host the Targeting AI podcast series.

22 Apr 2025

All Rights Reserved, Copyright 2018 - 2025, TechTarget | Read our Privacy Statement