Modern Stack
- Editor's letter: Quantum supremacy and the path to encryption chaos
- Cover story: AI and machine learning take on cloud security woes
- Infographic: DevOps metrics point to fast releases, open source adoption
- Feature: In Kubernetes management, infrastructure work is tricky
- Column: Invest in an IT infrastructure upgrade -- or keep fixing it
- Column: Cloud and containers rewrite monitoring and management rulebooks
AI and machine learning take on cloud security woes
AI-infused technologies continue to target almost all aspects of cloud monitoring and management, including security. But don't assume they're a silver bullet.
AI and machine learning won't magically solve all of the IT world's security problems, despite what some of the hype might suggest. Used sparingly, though, these technologies can make a security team's life much easier when operating at scale.
AI and machine learning are the buzzwords du jour in IT, and security sometimes gets pulled into those marketing messages. A flood of products that tout their algorithmic prowess have hit the market, some more dubious than others. But if enterprises can look past the space-age sales pitches and claims of security utopia, there is the potential to alleviate some of the more mundane aspects of cloud security.
Machine learning involves the collection of data, the extraction of particular features within that data to train models, and the continual application and tuning of those models to ensure the desired outcomes. Enterprises use these techniques today, but in security, they work best when the number of dimensions is relatively small. They can be particularly beneficial in automation, which is critical to successful cloud deployments at scale.
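As a rough illustration of that workflow -- not any vendor's product -- the sketch below collects events, extracts a handful of low-dimensional features, trains an anomaly detector and scores new events. The features and numbers are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# 1. Collect: each row is one login event ->
#    (hour of day, bytes transferred, failed attempts)
events = np.array([
    [9, 1200, 0],
    [10, 900, 1],
    [11, 1500, 0],
    [14, 1100, 0],
    [3, 98000, 7],   # unusual: 3 a.m., huge transfer, repeated failures
])

# 2. Train: fit an anomaly detector on the small feature matrix.
model = IsolationForest(contamination=0.2, random_state=0).fit(events)

# 3. Apply: -1 flags an outlier for an analyst to review;
#    retrain periodically as the baseline drifts.
print(model.predict([[2, 87000, 5]]))  # -> [-1], anomalous
```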
Photo-printing retailer Shutterfly primarily works with AWS as it transitions away from a local data center. The biggest challenge has been AWS identity and access management (IAM), the most complex piece of the cloud model. It also differs greatly from the controls used in private data centers, said Aaron Peck, Shutterfly's CISO.
"The containers, the Lambda [functions] -- there are instructions on how to do that," Peck said. "There are no instructions to manage IAM at scale, because IAM at scale means something very different for every company."
Peck used the example of a company that has 100 users and 200 different groups. Within AWS, there are more than 100 services and 4,000 roles across those services, which could translate to around 100,000 policy decisions if that company implements best practices around least privilege.
"It's unrealistic to do that at scale without any automation or help from machine learning or AI to make those decisions," Peck said.
Shutterfly had AWS CloudTrail logs sent directly to its Splunk deployment, but eventually turned to Avid Secure to handle visibility and compliance checks. This helps the company monitor for problems in the CI/CD pipeline. Avid Secure is among a growing number of startups that use machine learning on a limited, targeted basis to ensure compliance.
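At their simplest, compliance checks of this sort are rules run over the same CloudTrail records; a minimal sketch follows, with the record shape abridged and the list of risky calls chosen for illustration.

```python
import json

# API calls that commonly warrant a compliance alert (illustrative list).
RISKY_CALLS = {"PutBucketAcl", "DeleteTrail", "StopLogging", "PutUserPolicy"}

def flag_events(cloudtrail_json: str) -> list[dict]:
    records = json.loads(cloudtrail_json)["Records"]
    return [r for r in records if r.get("eventName") in RISKY_CALLS]

sample = ('{"Records": [{"eventName": "StopLogging", '
          '"userIdentity": {"arn": "arn:aws:iam::123456789012:user/dev"}}]}')
for event in flag_events(sample):
    print("compliance alert:", event["eventName"])
```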
New development models require new security approaches
The emergence of AI and machine learning in security coincides with a broader shift in software development. Companies that build microservices have pushed for greater collaboration between developers and operations teams, and security enters the process earlier, in what is known as DevSecOps.
"The cloud delivery models now rely on people doing distributed work," said Fernando Montenegro, an analyst at 451 Research. "You have DevOps teams, and the role for security becomes much more about building those guardrails and letting people run free within them."
A CI/CD pipeline means developers may push out code every day, which renders lengthy, manual security reviews obsolete. And in an API-driven world, far more data is generated now than when private data centers handled everything. That volume creates the need for greater automation.
Hyperscale clouds and third parties get in on AI
All the major cloud providers have added some degree of AI to their security services, but two of the most prominent examples come from AWS' security portfolio. Amazon Macie discovers, classifies and protects sensitive data, while Amazon GuardDuty adds continuous monitoring to protect against unauthorized actions.
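Teams can pull those results into their own tooling. The sketch below fetches GuardDuty findings with boto3; it assumes AWS credentials and an already enabled detector, and retrieves only the first page of results for brevity.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Assumes at least one detector is enabled in the account/region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)
    for f in findings["Findings"]:
        print(f["Severity"], f["Type"], f["Title"])
```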
While AWS continues to add functionality, these tools have already earned praise for their ability to work in the background and add layers of protection. These types of services are ideal for hyperscale cloud providers because the vast amounts of data across their networks make it easier to spot patterns of malicious behavior.
Still, there are limitations. Amazon Macie and GuardDuty, for example, only work natively on AWS. Since most organizations spread applications across multiple public clouds and private data centers, they may forgo any tool that doesn't work in most or all environments.
So far, AI and machine learning have largely been confined to localized environments. To be truly effective, however, these tools must move beyond those boundaries, said Abhi Dugar, an IDC analyst.
"That's when you start looking across data centers or across availability zones, and the types of threat that might come that are not thought about yet or considered yet," Dugar said.
Also, AWS and other public cloud providers appear reluctant to dive too deeply into application-level security. The shared-responsibility model -- pioneered by AWS -- draws lines that dictate where a vendor's duties end and a customer's begin. Users have access to a flood of logging data they can use to track workloads internally, or they can turn to a bevy of third-party vendors to handle that for them.
Some third-party security vendors are wary of overselling the role of machine learning in cloud security. They say the technology is best suited to binary decisions, with successful implementations already found in malware detection and scans for sensitive information.
Trend Micro has used machine learning internally for years, but only where it makes sense, said Mark Nunnikhoven, vice president of cloud research at the security vendor. Often, it is less costly and time-consuming to run a simpler statistical analysis than to train and deploy AI models.
"It can be computationally very expensive," Nunnikhoven said. "When you need to determine if an action is good or bad, there might be better ways."
And bad actors are good at what they do. They often find innovative ways to work around these algorithms to avoid being spotted, said Daniel Clayton, senior director of cybersecurity and operations at Rackspace. That's why the use of AI for behavioral analysis is still a ways off, and why security analysts will still play a vital role in threat identification and response.
"There's this push to solve security, which is a pretty big problem, with a silver-bullet tool. And it just doesn't exist," Clayton said.