IBM study shows security for GenAI projects is an afterthought

IBM's survey of C-suite executives finds that 82% say trustworthy and secure AI is essential, but only 24% have built a security component into their GenAI projects.


SAN FRANCISCO -- While 82% of C-suite executives acknowledge that trustworthy and secure AI is essential, according to a joint study from IBM and Amazon Web Services, only 24% are accounting for security in their GenAI-related projects.

The study, published Monday at RSA Conference 2024, presents the results of a survey of C-suite executives' perspectives on promoting secure applications of AI, primarily generative AI. IBM noted that just 24% of respondents included a security component in their generative AI initiatives, and "nearly 70% say innovation takes precedence over security."

"While a majority of executives are concerned about unpredictable risks impacting gen AI initiatives, they are not prioritizing security," the report stated.

Although a robust AI threat landscape has not yet taken shape, current attack use cases include using ChatGPT and similar tools to create phishing email scripts and deepfake audio. IBM X-Force security researchers expect AI systems to be targeted on a larger scale as the technology matures.

"While a consolidated AI threat surface is only starting to form, IBM X-Force researchers anticipate that once the industry landscape matures around common technologies and enablement models, threat actors will begin to target these AI systems more broadly," the report read. "Indeed, that convergence is well underway as the market is maturing rapidly, and leading providers are already emerging across hardware, software, and services."

More immediately concerning, according to IBM's report, are companies that fail to properly secure the AI models they develop and use in their businesses. Improper use of GenAI tools can lead to sensitive data being mishandled or leaked. The report noted that "shadow AI" usage is also growing within organizations, with employees using GenAI tools that have yet to be approved and secured by enterprise security teams.
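A common first step against shadow AI is simply to find it. The short Python sketch below is illustrative only, not IBM tooling; the proxy log format, column names and domain list are all assumptions made for the example. It scans an exported proxy log and flags requests to GenAI endpoints that the security team has not approved.

import csv

# Hypothetical shadow-AI triage: scan a proxy log export for requests
# to GenAI endpoints that security has not sanctioned. The domains,
# approval list and CSV columns (user, dest_host) are assumptions.
GENAI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}
APPROVED = {"api.openai.com"}  # endpoints sanctioned by the security team


def flag_shadow_ai(log_path: str) -> list[dict]:
    """Return log rows whose destination is an unapproved GenAI domain."""
    flagged = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row["dest_host"].lower()
            if host in GENAI_DOMAINS and host not in APPROVED:
                flagged.append(row)
    return flagged

In a real deployment, the flagged rows would more likely feed an approval workflow than an outright block list, since the goal is to bring unsanctioned usage under governance.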

To address these risks, IBM in January announced a framework for securing GenAI. Its tenets include securing the centralized data used to train AI models; securing models by scanning development pipelines for vulnerabilities and enforcing policies and access control; and securing usage against attacks involving live AI models.
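As a rough illustration of the usage tenet, the Python sketch below shows one shape such a guard might take in front of a live model. It is a minimal sketch built on assumed patterns, not IBM's implementation: it rejects prompts matching simple injection phrases and redacts text that looks like an email address or U.S. Social Security number before the prompt reaches the model.

import re

# Hypothetical usage-layer guard: screen prompts before they reach a
# live GenAI model. The injection phrases and redaction patterns are
# illustrative examples, not part of IBM's framework.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def guard_prompt(prompt: str) -> str:
    """Reject likely injection attempts and redact obvious PII."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt blocked: possible injection attempt")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt


print(guard_prompt("Summarize the ticket from jane.doe@example.com"))
# -> Summarize the ticket from [REDACTED EMAIL]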

In addition, IBM's offensive security team, X-Force Red, on Monday introduced a new testing service that uses red teaming to assess AI applications, AI models and MLSecOps pipelines.

At RSA Conference 2024 on Wednesday, IBM will host a session titled "Innovate Now, Secure Later? Decisions, Decisions ..." that discusses how to secure the AI pipeline and establish governance for it.

Ryan Dougherty, program director for emerging security technology at IBM Security and a presenter for the session, told TechTarget Editorial that securing AI from the start is critical for the technology industry.

"The reason why it becomes so critical when it comes to generative AI is that it's becoming deeply embedded in business applications and business processes," he said. "That integration into the business fabric is what elevates this above and beyond other potential risks and threats."

Dougherty added that securing AI at the ground level is critical from a business perspective too.

"Generative AI is trained and operationalized on a massive amount of sensitive business data, and we need to secure those new crown jewels because that's where the competitive advantage comes in. It's around the data that these organizations have and the insights they're getting by using the generative AI and surfacing it within their applications to make their businesses better," he said.

"They're also spending a tremendous amount of money on the models themselves, which are very expensive," Dougherty noted. "There's a lot of [intellectual property] and investment that are going in to operationalize these generative AI applications, and businesses just simply can't afford not to secure them."

Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.
