3 ways AI is transforming cloud security, according to experts
Generative AI only recently burst into the collective consciousness, but experts say it is already changing cloud security -- on both the defensive and offensive sides.
AI appears poised to revolutionize cybersecurity, with changes already happening on the ground -- and in the cloud.
In a recent survey by the Cloud Security Alliance (CSA) and Google Cloud, 67% of IT and security professionals said they have started testing generative AI (GenAI) capabilities for security use cases, with another 27% in the planning phase. Just 6% of respondents said they have no current plans to explore AI for security.
Experts say GenAI will increasingly augment cybersecurity operations, offering guidance and assistance to human practitioners to help them make better and more informed decisions. "That's especially relevant in cloud because cloud is complicated, dynamic and changes constantly," said Charlie Winckless, analyst at Gartner. "Staying on top of all of that is a problem."
It's a problem AI and machine learning (ML) promise to help solve, with natural language queries and responses already becoming a "standard staple" in cloud security tools, according to Andras Cser, analyst at Forrester.
The ability to ask a large language model (LLM) a question and receive a straightforward answer -- based on massive amounts of complex technical data, which AI models can process at speed -- is a potential game changer. Rather than sifting through the data themselves, practitioners can theoretically validate their decisions and harden an organization's security posture much more quickly and easily.
"Instead of having to really dig in and understand the details, we can ask natural language questions to sort through the noise of these tools more effectively and understand what's really happening," Winckless said.
Caleb Sima, chair of CSA's AI Safety Initiative, predicted AI will eventually autonomously construct and oversee cloud infrastructure and pipelines, automatically integrating sophisticated security controls to minimize the attack surface. In the short term, he added, AI-driven tools are already simplifying the cloud engineer's role by easing longstanding cloud security pain points.
3 key AI cloud security use cases
Key cloud security use cases for GenAI, according to experts, include the following.
1. Misconfiguration detection and remediation
Cloud misconfigurations pose one of the most serious security risks enterprises face, according to the CSA, National Security Agency, European Union and others.
In complicated cloud environments, settings and permissions errors perennially abound, opening the door to cyberattacks and the exposure of sensitive data. "At the end of the day, misconfigurations are behind a host of security breaches," Sima said.
Manually identifying and troubleshooting every cloud misconfiguration is time-consuming and tedious, if not impossible. AI tools can automatically analyze infrastructure and systems to detect anomalies and misconfigurations and then fix them. "They can automate remediation far faster and more efficiently than people can," Sima added.
In most cases today, however, AI tools suggest policy or configuration changes to human operators, who then approve or reject them, according to Winckless. While GenAI technology might be capable of remediating vulnerabilities without human intervention, few security programs allow it to do so in real-world cloud environments.
"Most organizations are still unwilling to automate changes in development and production," Winckless said. "That has to change at some point, but it's about trust. It will take years." For the foreseeable future, he added, human oversight and validation of AI remain important and advisable.
2. User behavior analysis
Cser said he expects to see GenAI improve detection capabilities in cloud security, with the technology able to process huge data sets and identify unusual access patterns that human operators would otherwise miss.
"AI will be able to take security teams on a deep dive into user behavior by contextualizing activities within the broader context of cloud environments," Sima agreed. AI algorithms will become increasingly good at recognizing abnormal behavior and alerting teams to potential security incidents, he added, based on factors such as the following:
- User roles.
- Access privileges.
- Device characteristics.
- Network traffic patterns.
Ultimately, Sima predicted, AI will not only be capable of accurately modeling current user behavior, but of anticipating future behavioral trends as well. "When taking this in total, we'll see AI being used to shape adaptive security policies and controls and assign risk scores to individual behaviors," he said.
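As a rough illustration of that kind of risk scoring, the sketch below weighs a few of the signals listed above into a single score for one access event. The weights, thresholds and event fields are invented for the example, not a production detection model.

```python
# Illustrative sketch: score a cloud access event against a user's baseline using
# the kinds of signals listed above. Weights, thresholds and fields are invented.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    role: str
    resource: str
    device_id: str
    source_ip: str

baseline = {
    "alice": {"roles": {"developer"}, "devices": {"laptop-42"}, "networks": {"10.0."}},
}

def risk_score(event: AccessEvent) -> float:
    """Return a 0.0-1.0 risk score; higher means more anomalous."""
    profile = baseline.get(event.user)
    if profile is None:
        return 1.0  # unknown user: maximum risk
    score = 0.0
    if event.role not in profile["roles"]:
        score += 0.4  # acting under an unusual role or privilege
    if event.device_id not in profile["devices"]:
        score += 0.3  # unrecognized device
    if not any(event.source_ip.startswith(net) for net in profile["networks"]):
        score += 0.3  # traffic from an unfamiliar network
    return min(score, 1.0)

event = AccessEvent("alice", "admin", "prod-db", "phone-7", "203.0.113.9")
print(risk_score(event))  # 1.0 -> alert the team or tighten adaptive policy
```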
3. Threat detection and response
Experts also anticipate GenAI will help security teams identify malware and other active cyberthreats much faster and more accurately than human practitioners can on their own, by analyzing the environment in real time and cross-referencing it with threat intelligence data.
Already, GenAI-based investigation copilots are aiding security teams' threat response efforts, according to Cser, by recommending proactive measures based on activity patterns.
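A simplified sketch of that cross-referencing step follows; the indicator feed and log entries are made-up examples, whereas a real copilot would pull both from production telemetry and a commercial threat intelligence feed.

```python
# Illustrative sketch: match indicators observed in live logs against a threat
# intelligence feed. The feed and the log entries are invented examples.
threat_feed = {
    "ips": {"198.51.100.23", "203.0.113.77"},
    "file_hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

observed_events = [
    {"type": "network", "value": "198.51.100.23", "source": "vpc-flow-logs"},
    {"type": "file", "value": "9e107d9d372bb6826bd81d3542a419d6", "source": "edr-agent"},
]

def match_indicators(events, feed):
    """Return events whose values appear in the threat intelligence feed."""
    hits = []
    for event in events:
        if event["type"] == "network" and event["value"] in feed["ips"]:
            hits.append(event)
        elif event["type"] == "file" and event["value"] in feed["file_hashes"]:
            hits.append(event)
    return hits

for hit in match_indicators(observed_events, threat_feed):
    print(f"Known indicator seen in {hit['source']}: {hit['value']} -- open an incident")
```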
AI cloud security threats
Advancements in AI technology will also change the threat landscape, with increasingly sophisticated, AI-based attacks all but inevitable, according to experts. "Threat actors will be able to leverage AI algorithms to launch highly adaptive attacks and evasion techniques," Sima said.
This would be cause for greater concern, except that research indicates the vast majority of organizations are moving quickly to invest in defensive AI capabilities. "We can assume companies are already anticipating how to best use AI to stay one step ahead of threat actors," Sima said. He added, however, that organizations need to continually prioritize AI security investments going forward if they are to gain and maintain the upper hand.
In other words, the endless game of whack-a-mole in which defenders and attackers have long engaged appears likely to continue -- albeit heightened by GenAI and ML.
Getting started with AI-driven cloud security
Many cloud security vendors are building GenAI capabilities directly into their existing tools and platforms. That means all but the largest hyperscale organizations needn't -- and shouldn't -- worry about building their own AI models, according to Winckless.
But just because a provider rolls out GenAI capabilities doesn't mean they are infallible, or even necessarily ready for prime time. For example, users might encounter AI hallucinations, in which an LLM produces erroneous information -- a risk that could be catastrophic in a cybersecurity context.
"Look at what frameworks your provider is using for generative AI and if they're providing any validation or verification of inputs and outputs," Winckless advised. "This is still an emerging space. It's very exciting, but it's also very challenging to determine how well the technology is being used."
Alissa Irei is senior site editor of TechTarget Security.