Information Security
- Editor's letter: AI in cybersecurity ups your odds against persistent threats
- Cover story: AI in security analytics is the enhancement you need
- Infographic: COVID-19 cybersecurity data shows rising risk during remote pivot
- Feature: 5 steps to get IoT cybersecurity and third parties in sync
- Column: Cybersecurity for remote workers: Lessons from the front
- Column: Weighing the future of firewalls in a zero-trust world
AI in security analytics is the enhancement you need
AI-powered analytics is critical to an effective, proactive security strategy. Learn how AI-enabled tools work and what your organization needs to do to reap their benefits.
The volume of cyberthreats is staggering.
One research project put the number of attacks detected during the first quarter of 2020 at 445 million. That figure, from the Q2 2020 Fraud and Abuse Report released by Arkose Labs, represents a 44% increase over the prior quarter -- the highest attack rate ever detected in a quarter. Likewise, others have reported a rising number of attacks, as well as an increasing level of sophistication in those attacks.
Although such statistics can vary, experts generally agreed that the volume and velocity of cyber attacks today can overwhelm enterprise security teams that aren't operating at peak proficiency and aren't using the latest processes and technologies to detect bad actors and stop them before they do damage.
That means having AI as part of the defense arsenal is now a must.
The sophistication, diversity, speed and volume of attacks -- coupled with the growing complexity and reach of an enterprise technology stack that no longer has any real perimeter -- have made conventional, manual security practices nearly obsolete.
Enterprise security teams -- already stretched thin by a talent shortage in the field -- are having a harder and harder time analyzing all of the incoming threat information and chasing down alerts.
Enterprises need AI in security analytics, plus automation, to keep pace and to give their analysts time to focus on the more proactive, high-value tasks that could put their organizations ahead in the security battle.
"AI plays a huge role in shifting the advantage because it can scan information and detect threats at a scale much greater than a human," said R "Ray" Wang, founder and principal analyst at Constellation Research. AI works on a scale, and with a consistency, pace and breadth of learning, that humans simply cannot match.
AI in enterprise security
Executives see AI as a critical technology that will help them transform many areas of the enterprise.
Deloitte's "State of AI in the Enterprise, 3rd Edition" found that 73% of the 2,737 IT and line-of-business executives polled believe AI is already either critically important or very important to their business today, and 74% said AI will be integrated into all enterprise applications within three years.
Deloitte further found that enterprise leaders are already deploying AI in their security operations; respondents said their use of AI in cybersecurity is second only to its use in IT-related functions.
AI in security analytics offers exponentially greater visibility into the activities going on within the organization by more quickly determining which activities are normal and which could be problematic.
Systems embedded with intelligence technologies, including machine learning and natural language processing, analyze data from external sources, such as collective threat intelligence reports, as well as from within the enterprise. The systems use those analyses to search for and identify patterns that fall outside what they've been taught to recognize as acceptable or safe activities.
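To make that pattern-recognition step concrete, here is a minimal sketch that uses scikit-learn's IsolationForest as a stand-in for a vendor's far more sophisticated models. The session features, values and contamination setting are hypothetical, chosen only to illustrate how activity outside a learned baseline gets flagged.

```python
# A minimal sketch of the pattern-recognition step described above, using
# IsolationForest as a stand-in for a vendor's detection model.
# The feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per user session:
# [logins_per_hour, bytes_out_mb, distinct_hosts_contacted, failed_auth_attempts]
baseline_sessions = np.array([
    [3, 12.0, 4, 0],
    [2,  8.5, 3, 1],
    [4, 15.2, 5, 0],
    [3, 10.1, 4, 0],
])

# Train on activity the team has vetted as normal ("acceptable or safe").
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_sessions)

# Score new activity; -1 marks sessions that fall outside the learned pattern.
new_sessions = np.array([
    [3, 11.0, 4, 0],      # looks like the baseline
    [40, 950.0, 60, 25],  # bulk, exfiltration-like behavior
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "flag for analyst review" if label == -1 else "normal"
    print(session, status)
```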
That's just the start of AI's capabilities and its selling points for enterprise cybersecurity, though. First, these AI systems are trained on huge data sets collected over multiple years from multiple organizations across different industries. Experts said that gives these intelligent systems a breadth of experience that far exceeds what a single enterprise security team can offer.
Cybersecurity pros can then further train an intelligent system to understand the nuances of their enterprise's network and operations, so the system can determine not only what's problematic from a global perspective but what additional activities raise red flags for that particular cybersecurity department.
Furthermore, these intelligent systems continue to learn after being deployed, so they're continually refining and fine-tuning their abilities to distinguish between appropriate and problematic activities and network traffic.
"AI comes into play when the system takes what it has been trained to see as good and bad, what attacks look like, and what it understands of the patterns within the organization and its business, and then detects new patterns," said Elliot Rose, head of digital trust and cybersecurity service at PA Consulting. "AI systems are always learning."
Consider how AI in security analytics can aid in security information and event management. A typical security team today is processing thousands and thousands of data points, trying to make sure no hackers are sneaking in while also trying to stay informed about emerging threats and vulnerabilities. An intelligent system could not only ingest and learn from real-time data feeds but could use that information to flag a novel threat -- something that is not exactly the same as what it had been trained to detect, but one it finds similar enough to identify as a problem.
"If humans have to go through all those different incidents and say 'yes, yes, no, yes, yes, no,' it produces alert fatigue," said Dimitrios Pavlakis, an ABI Research industry analyst responsible for digital, biometrics and IoT security research. "But AI can go through all this traffic in real time and say, 'Here's what I've learned from the past, so this is good to go, while this is abnormal behavior.'"
Enterprise security teams typically buy intelligent tools, then train them to meet the unique requirements of their own organizations, Pavlakis and other experts said. Only the largest and most security-sensitive entities are building their own intelligence tools from scratch.
Vendors promoting the AI capabilities offered in their products and services include Check Point, CrowdStrike, Darktrace, Fortinet, Palo Alto Networks, Symantec and Webroot.
"We're already seeing AI used to detect anomalies we wouldn't even know to flag as anomalies," said Josephine Wolff, assistant professor of cybersecurity policy with The Fletcher School at Tufts University and author of the June 2020 report "How to Improve Cybersecurity for Artificial Intelligence."
An intelligent system can, for example, help call centers better identify fraud. It could use natural language processing to authenticate a caller or to identify noises, such as clicks, that could indicate a call is part of a scam, said Sanjay Srivastava, chief digital officer with the professional services firm Genpact. Meanwhile, machine learning algorithms could analyze other elements of the call, such as point of origin, to determine whether those elements confirm the caller's identity or raise concerns.
A credit card company could use an intelligent system to track buying patterns across multiple stores. While any individual purchase might not indicate a problem, AI analysis that looks at activity collectively might show a pattern outside the customer’s norm and thus suggest that a credit card is compromised.
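A stripped-down version of that per-customer baseline check might look like the following. Real fraud models weigh many more signals collectively, such as merchant, geography and purchase sequence; the purchase history, amounts and threshold here are hypothetical.

```python
# A simplified sketch of the buying-pattern idea: compare a new charge to the
# cardholder's own history instead of judging it in isolation.
from statistics import mean, stdev

def is_out_of_pattern(history_amounts, new_amount, z_threshold=3.0):
    """Flag a charge that sits far outside the customer's usual spend."""
    mu = mean(history_amounts)
    sigma = stdev(history_amounts)
    if sigma == 0:
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

history = [42.10, 18.75, 65.00, 23.40, 51.20, 37.90]  # recent purchases
print(is_out_of_pattern(history, 48.00))    # within the usual range -> False
print(is_out_of_pattern(history, 2400.00))  # far outside the pattern -> True
```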
AI in security analytics: Promise vs. hype
Demand for intelligent cybersecurity tools is growing, as most organizations are counting on AI to improve their cybersecurity posture.
Consider the findings from the 2019 "Reinventing Cybersecurity with Artificial Intelligence" report from Capgemini, a consulting and professional services firm. Capgemini surveyed 850 senior IT and cybersecurity leaders and found the following:
- 69% believe AI will be necessary to respond to cyber attacks;
- 61% believe AI will be needed to help identify critical threats;
- 64% said AI lowers the cost to detect and respond to breaches;
- 69% said AI provides a higher accuracy in detecting breaches; and
- 60% believe AI results in higher efficiency for the cybersecurity analysts in the organization.
Executives' faith in AI's ability to aid enterprise security is fueling deployments. Capgemini found that only about 20% of organizations were using AI prior to 2019, while 63% said they were planning to deploy AI by 2020.
Meanwhile, Meticulous Research projected in a June 2020 market forecast that the AI cybersecurity market will see a compound annual growth rate of 23.6% for the next seven years, reaching $46.3 billion in 2027.
In its report, Capgemini identified five use cases that deliver the highest benefits with the lowest implementation complexity: malware detection, intrusion detection, risk scoring in the network for operational technology (OT), fraud detection for IT and user/machine behavioral analysis for the internet of things.
Organizations shouldn't discount the value of AI in security analytics, even if some can't yet quantify how much improvement AI brings to cybersecurity, Tufts' Wolff said.
"These systems improve over time, so even if organizations aren't currently getting huge rewards from them, the longer they have them, the better they will get, and that's not an insignificant reason to be experimenting with them and trying them out right now," Wolff said.
Preparing enterprise security for AI
Despite the intelligence of this emerging class of cybersecurity systems, experts roundly rejected the notion that any of them can simply be dropped into an enterprise and prove effective.
"In true AI nirvana, you want a black box to sit there and not just spot things but actually defend against it," Rose of PA Consulting said. These systems don't quite work that way, at least not yet. But they need to be set up in the right way."
To benefit from AI in security analytics, an enterprise must already have a mature, well-run cybersecurity function, said Lisa O'Connor, managing director of Accenture Security and cybersecurity R&D at Accenture Labs.
"You have to be brilliant at the basics before you get to AI," she said. "You have to have a full understanding of your footprint. You have to have accurate log data, and it has to be quality, because we're feeding that data into the model. If it's inaccurate or low-confidence data, you're going to affect the outcome, and you'll miss things. Your model is only as good as your training data."
Other experts agreed. They explained that although intelligent security systems offered by vendors are trained using security-related data, such as threat intelligence reports compiled by multiple sources, the systems must be further fine-tuned using an organization's own information to be most effective.
"You have to be mature to be able to handle AI," Rose said, noting that CISOs must also be committed to training their analysts on intelligent systems and upskilling them so they can address more nuanced alerts.
CISOs also need to have AI management strategies in place that ensure the underlying algorithms are not biased, the data used to train them is safeguarded, and the systems are used in an ethical manner.
Furthermore, cybersecurity leaders need mechanisms to monitor system performance to confirm the accuracy of AI's continued learning. Finally, they must have a means to measure performance over time to verify that these systems result in improved security postures for their organizations.
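One simple way to monitor that continued learning is to reconcile the system's verdicts against analyst dispositions each reporting period and watch precision and recall over time. The sketch below is a hypothetical illustration of that bookkeeping, not a prescribed metric set.

```python
# A bare-bones sketch of tracking how often the system's verdicts match
# analyst dispositions, so drift in accuracy becomes visible over time.
from collections import Counter

def score_period(feedback):
    """feedback: list of (model_verdict, analyst_verdict) pairs, 'alert'/'benign'."""
    counts = Counter(feedback)
    true_pos = counts[("alert", "alert")]
    false_pos = counts[("alert", "benign")]
    false_neg = counts[("benign", "alert")]
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return {"precision": round(precision, 2), "recall": round(recall, 2)}

# Hypothetical month of analyst feedback on the system's calls.
september = ([("alert", "alert")] * 40 + [("alert", "benign")] * 10
             + [("benign", "alert")] * 5)
print(score_period(september))  # {'precision': 0.8, 'recall': 0.89}
```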
Experts also stressed the need for CISOs to consider, in conjunction with their vendors, a security strategy to safeguard the intelligence itself. "Think about protecting the AI you're using," Rose said. "Because we already see bad actors looking at how to exploit this."
Experts said that as AI matures, its ability to automate relatively simple, repetitive tasks will grow. CISOs should understand how to maximize this potential so analysts can spend more of their time on the highest-value tasks that machines cannot perform. As ABI Research's Pavlakis said, "AI should empower the human users; it shouldn't replace them."