
How AI will transform vulnerability management for the better

Artificial intelligence is improving how enterprises address security vulnerabilities, resulting in stronger security postures and smaller attack surfaces.

Poor vulnerability management has historically led to countless cyberattacks and many full-blown data breaches. Yet security teams still generally fail to react to new critical vulnerabilities as fast as cybercriminals do, often due to resource constraints.

Increasingly, however, cybersecurity professionals, tools and services can use machine learning (ML) and large language models (LLMs) to improve how they manage security vulnerabilities, finding and addressing them more efficiently and effectively. And they'll need to, as AI-fueled cyberthreats themselves become increasingly formidable.

AI in vulnerability management

The vulnerability management process is a critical but challenging and time-consuming part of any cybersecurity program. Its main functions include the following:

  • To detect potential vulnerabilities before they become entry points for cyberthreats.
  • To assess the level of risk associated with each vulnerability.
  • To prioritize vulnerabilities for mitigation, based on risk.
  • To mitigate the risk either by fixing the vulnerability or by putting some other control in place to prevent its exploitation.

Here's how AI systems can help with each of these functions.

AI in vulnerability detection

AI has been transforming vulnerability detection for the past few years.

It has greatly improved the ability of security tools to parse logs and configuration data and detect vulnerabilities such as open network ports, unencrypted network connections and unpatched versions of software carrying known bugs.

Security tools with both ML and LLM capabilities can more easily and effectively pinpoint diffuse webs of vulnerabilities that exist across multiple systems. This means they can detect that a problem on system A combines with a problem on system B to create a vulnerability on adjacent system C.
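
Cross-system correlation of this kind can be sketched in a few lines. The systems, finding names and the single correlation rule below are hypothetical illustrations, not any real product's detection logic:

```python
# Minimal sketch: combining per-system findings into a derived,
# cross-system vulnerability. All names here are illustrative.

findings = {
    "system-a": {"open_port_445"},          # exposed SMB service
    "system-b": {"weak_smb_credentials"},   # reused weak credentials
    "system-c": set(),                      # no local findings
}

def correlate(findings):
    """Apply a rule: A's open SMB port plus B's weak credentials
    create a lateral-movement path to adjacent system C."""
    derived = []
    if ("open_port_445" in findings["system-a"]
            and "weak_smb_credentials" in findings["system-b"]):
        derived.append(("system-c", "lateral_movement_path"))
    return derived

print(correlate(findings))  # [('system-c', 'lateral_movement_path')]
```

Real tools evaluate thousands of such rules, and ML helps discover the combinations humans would not think to encode by hand.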

AI in risk assessment and prioritization

AI tools can also improve IT's ability to assess the security risks associated with a vulnerability by making it easier to do the following:

  • Cross-reference data from the CVE list and threat intelligence sources to identify critical vulnerabilities.
  • See when a vulnerability touches critical systems or sensitive data.

This lets enterprises prioritize potential problems and more efficiently focus scarce IT resources to mitigate those vulnerabilities that create the greatest risk to the enterprise.
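
As a rough illustration, risk-based prioritization can be reduced to scoring and sorting. The weighting scheme below -- CVSS base score times an asset-criticality weight, doubled when threat intelligence reports active exploitation -- is an assumption for the sketch, not a standard formula:

```python
# Minimal sketch of risk-based vulnerability prioritization.
# CVE IDs, weights and the scoring formula are illustrative.

vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset_weight": 1.0, "exploited": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "asset_weight": 0.3, "exploited": False},
    {"cve": "CVE-2024-0003", "cvss": 5.0, "asset_weight": 1.0, "exploited": True},
]

def risk_score(v):
    # Known in-the-wild exploitation doubles the effective risk.
    return v["cvss"] * v["asset_weight"] * (2.0 if v["exploited"] else 1.0)

ranked = sorted(vulns, key=risk_score, reverse=True)
print([v["cve"] for v in ranked])
# ['CVE-2024-0001', 'CVE-2024-0003', 'CVE-2024-0002']
```

Note that the medium-severity CVE-2024-0003 outranks the higher-CVSS CVE-2024-0002 because it sits on a critical asset and is being actively exploited -- exactly the kind of context AI-assisted tools surface automatically.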

AI in vulnerability mitigation

Lastly, AI tools can help deploy mitigation and remediation strategies. AI tools can push out software patches more effectively and suggest changes to security settings and device configurations to close holes. LLM tools can suggest code fixes for script and application code vulnerabilities.
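
A simple form of automated mitigation suggestion is matching insecure settings against hardening rules and emitting the secure value. The configuration keys and rules below are illustrative examples, not a specific product's rule set:

```python
# Minimal sketch: suggest configuration fixes by comparing settings
# against known insecure values. Keys and values are illustrative.

HARDENING_RULES = {
    # setting: (insecure value, recommended secure value)
    "ssl_protocols": ("TLSv1 TLSv1.1", "TLSv1.2 TLSv1.3"),
    "password_min_length": ("6", "14"),
}

def suggest_fixes(config):
    """Return only the settings that match a known insecure value."""
    return {
        key: secure
        for key, (insecure, secure) in HARDENING_RULES.items()
        if config.get(key) == insecure
    }

config = {"ssl_protocols": "TLSv1 TLSv1.1", "password_min_length": "14"}
print(suggest_fixes(config))  # {'ssl_protocols': 'TLSv1.2 TLSv1.3'}
```

LLM-based tools extend this idea beyond lookup tables, generating fix suggestions for settings and code they have not seen verbatim -- which is why their output still needs human review before deployment.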

Benefits of AI in vulnerability management

The number one benefit of AI vulnerability management is that it allows IT staff to be more efficient. A single IT staffer wielding well-trained, well-managed AI-powered tools can get significantly more done than one without those resources.

An important part of that efficiency is that AI-powered tools provide both broader and deeper visibility into an environment and enable better decision-making. At the same time, they shield security staff from redundant alarms and alerts: through automation workflows, AI can -- and already does -- reduce the rate of false positives that consume valuable human attention.

Another major benefit is security agility. AI-powered tools accelerate the whole vulnerability management cycle by speeding up detection and identification, risk assessment and mitigation implementations.

Challenges of AI in vulnerability management

One of the biggest obstacles slowing the adoption of AI vulnerability management is cost. The best security tools usually aren't cheap, and the best AI functionality comes at a price premium. The added expense can be a major challenge for resource-strapped departments, even if it is lower than the cost of the next-best alternative -- adding staff.

AI tools also come with problems unique to the technology. One is training time -- the time a tool needs to observe the environment before the AI can distinguish what is normal from what is anomalous. Training time is not new to current generations of tools, though newer tools might require more of it than older ones to deliver their full measure of value -- presumed to be greater than that of the older tools.

Another related issue is model drift. This is the tendency of AI models to drift away from the behaviors developed during their initial training, thus requiring retraining to resolve.
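
One simple way to notice drift is to compare a recent window of behavior against the training-time baseline and flag retraining when it shifts too far. The single-feature comparison and the 25% threshold below are assumptions for the sketch; production drift detection uses richer statistics:

```python
# Minimal sketch of drift detection: compare the recent mean of a
# monitored metric against its training baseline. Threshold and
# metric (alerts/day) are illustrative assumptions.

def drifted(baseline, recent, threshold=0.25):
    """Flag drift when the recent mean moves more than `threshold`
    (as a fraction) away from the baseline mean."""
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base_mean) / base_mean > threshold

baseline_rates = [100, 105, 98, 102]  # e.g. alerts/day during training
recent_rates = [150, 160, 145]        # behavior has shifted upward

print(drifted(baseline_rates, recent_rates))  # True -> retrain
```

When the check fires, the model is retrained on fresh data so its notion of "normal" matches the environment again.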

Then there are the LLM-specific problems of hallucinations and hypnotism. AI hallucinations occur when the model fabricates an incorrect response instead of producing the correct answer -- or instead of admitting it has insufficient data to respond. Hypnotism is the result of internal bad actors instructing the AI to return incorrect or incomplete answers, presumably to hide something or to sabotage network and security operations.

Future of AI-driven vulnerability management

The future of AI-driven vulnerability management is ubiquity. Most tools already incorporate ML functionality, and systems augmented with LLM functionality will likely become predominant within five years.

Moreover, AI-driven vulnerability management will likely reshape the overall future of vulnerability management. The combination of scarce human staffing, proliferating threats, expanding threat surfaces and increasing attention on cybersecurity by federal regulatory and executive agencies means vulnerability management will become more important and tougher to oversee. IT will have to deploy AI-powered tools to meet the challenge.

John Burke is CTO and principal research analyst with Nemertes Research. With nearly two decades of technology experience, he has worked at all levels of IT, including as an end-user support specialist, programmer, system administrator, database specialist, network administrator, network architect and systems architect. His focus areas include AI, cloud, networking, infrastructure, automation and cybersecurity.
