AI-driven attacks seen as chief cloud security threat
Tried-and-true cloud security threats are on the rise. But according to a new report from Palo Alto Networks, the specter of generative AI threats has organizations concerned.
The prospect of AI-powered attacks has some organizations worried about their cloud security postures, according to a Palo Alto Networks report published Wednesday.
The report, titled "The State of Cloud Native Security," includes the results of a survey of more than 2,800 infosec professionals who self-reported as knowledgeable about their organization's cloud infrastructure or cloud security and who worked either in executive leadership or as security, IT or development practitioners.
Some of the most notable findings in the survey involved generative AI, a technology that has continued its ascent in the public consciousness since OpenAI released ChatGPT in fall 2022. According to the report, 43% of respondents predicted AI-powered threats will evade traditional defense techniques to become a more common threat vector, while 38% ranked AI-powered attacks as a top cloud security concern.
Perhaps more surprising were the results concerning respondents' biggest cloud security concerns from a threat perspective. Risks introduced by AI-generated code came in first at 44%, while AI-powered attacks ranked third. Recent research has shown that generative AI tools such as GitHub Copilot can replicate vulnerabilities and insecure code found in existing code bases.
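To make that replication risk concrete, here is a minimal, hypothetical sketch of the kind of injectable query-building pattern that research on AI coding assistants has flagged, shown next to the parameterized fix. The table, data and function names are invented for illustration and are not drawn from the report.

```python
import sqlite3

# Hypothetical example: the injectable query-building pattern that studies of
# AI coding assistants have flagged, next to the parameterized fix.

def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # Vulnerable: user input is interpolated directly into the SQL string,
    # the kind of flaw an assistant trained on insecure code can replicate.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # Safe: a parameterized query keeps user input out of the SQL grammar.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    payload = "' OR '1'='1"  # classic injection payload
    print("unsafe:", find_user_unsafe(conn, payload))  # returns every row
    print("safe:  ", find_user_safe(conn, payload))    # returns nothing
```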
Meanwhile, API risks came in second, inadequate access management fourth and unknown or unmanaged assets seventh. That ranking comes despite many experts saying generative AI remains a nascent tool in the threat landscape, while the non-AI issues listed are driving swaths of cyberattacks here and now.
To this point, Palo Alto Networks reported that cloud security incidents are on the rise: 64% of organizations reported data breaches, 48% reported an increase in compliance violations, 46% reported insecure APIs and 45% reported operational downtime caused by misconfiguration.
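As one illustration of the misconfiguration category, the following sketch audits AWS S3 buckets for a missing public access block, a common example of the kind of misconfiguration the survey describes. It assumes configured AWS credentials and uses the standard boto3 client; it is not drawn from the report itself.

```python
import boto3
from botocore.exceptions import ClientError

# A minimal sketch of auditing one common misconfiguration class:
# S3 buckets with no public access block configured at all.
# Assumes AWS credentials are already set up for this account.

def audit_public_access(s3_client) -> list[str]:
    """Return names of buckets with no public access block configured."""
    flagged = []
    for bucket in s3_client.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3_client.get_public_access_block(Bucket=name)
        except ClientError as err:
            # The absence of any configuration is itself the warning sign.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    client = boto3.client("s3")
    for name in audit_public_access(client):
        print(f"bucket without public access block: {name}")
```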
Amol Mathur, senior vice president and general manager of Prisma Cloud at Palo Alto Networks, told TechTarget Editorial that customers' concerns regarding AI threats speak to the emerging nature of generative AI technology.
"The sense I get is that they're not saying that these issues are less important than AI threats," Mathur said. "It's more that people are more fearful of things they don't truly understand. When it comes to misconfigurations, identity, vulnerabilities, they have good programs and tools in place to give them the visibility and orchestration and so on. AI is an area that most people just don't understand the potential impact."
The report stated that 100% of surveyed organizations embraced AI-assisted coding to some extent. Mathur said that while generative AI can allow organizations to code and innovate faster, the use of AI in development has already led to vulnerabilities and poorly written code.
A joint report IBM and Amazon Web Services published last Monday at RSA Conference 2024 found that while 82% of surveyed C-suite executives acknowledged the importance of trustworthy and secure AI, only 24% had included a security component in their generative AI-related products.
"The question is, do people understand that just because it's generated by AI doesn't mean that it's secure?" Mathur said. "It's one thing when you're using a model or an app from a very well-established firm out there, though even that could go wrong. But there are also open source LLMs out there. People are using all sorts of stuff. What [is] that LLM trained on? Sure, you can move fast. You can get all this this software out quickly. But what's the risk?"
Mathur said cloud-centric defenders as a whole are getting better at avoiding issues such as identity threats and misconfigurations. But he also said those problems remain common, and there's still a long way to go.
"There are still organizations using fragmented visibility and fragmented tools that don't share context and don't talk to each other," he said. "It has gotten better, but these issues have not gone away. We still see a ton of issues in data, identity and misconfiguration."
Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.