
ChatGPT uses for cybersecurity continue to ramp up

The use of OpenAI's technology in cybersecurity products is growing as companies look to improve threat detection and assist short-staffed and fatigued security teams.

With the release of OpenAI's latest large language model, GPT-4, users are discovering its capabilities and putting the technology to work in cybersecurity, for both beneficial and malicious purposes.

At the RSA Conference 2023, AI was the dominant topic as numerous sessions and panels discussed the emergence of LLMs and how they are already affecting cybersecurity. Many of those discussions focused on malicious use and unintended consequences of LLMs, such as generating disinformation.

Several sessions illustrated how threat actors could leverage OpenAI's technology for nefarious activities. One session, titled "ChatGPT: A New Generation of Dynamic Machine Based Attacks?" demonstrated the GPT-3.5 model's proficiency at constructing social engineering schemes, writing phishing emails and even obfuscating code.

However, session speaker Greg Day, vice president and global field CISO at Cybereason, said ChatGPT's usage is a double-edged sword.

"What I think we have to expect is more attacks, more code reuse, more creative ways of using ChatGPT to use that, but I want to balance that because it can be used in a good way for pen testing," Day said.

Alongside Day was speaker Paul Vann, a fourth-year student at the University of Virginia, who explained that GPT-4 generates code more efficiently than version 3.5 and provides better explanations for its decisions. He also described OpenAI Playground, a tool IT professionals can use to apply these advancements in their defense practices.

OpenAI Playground provides a platform for users to interact with different kinds of LLMs, such as chat and fine-tuned models, and to test different configurations of their own OpenAI models.

"What's really interesting about this is it kind of gives you an opportunity to build models that you can integrate with your products and actually fine-tune to focus on cybersecurity purposes," Vann said.

In recent weeks, technology companies have integrated OpenAI's technology into their products in numerous ways across infosec. Some of the latest applications of AI include analyzing threat intelligence and automating low-risk, repetitive tasks, with the goal of easing the load on short-staffed and fatigued security teams.

AI use in cybersecurity practices

Microsoft's new Security Copilot, powered by the company's own security-based GPT-4 model, collates data from verified and trusted sources in addition to Microsoft Defender Threat Intelligence and Microsoft Sentinel to aid users in incident response, threat hunting and security reporting.

Only available to professional security teams, the tool is intended to improve security operations center analysts' efficiency.

"We think there's a lot of opportunity to help security professionals get clarity, catch what others miss and learn more about what they need to do to be better security professionals," said Chang Kawaguchi, vice president and AI security architect at Microsoft.

According to Kawaguchi, the tool takes the normal threat detection process -- submitting queries, analyzing data and making decisions at an incident's conclusion -- and supplies skills that many users lack due to the current cybersecurity personnel deficit.

Security professionals can use Copilot to reverse-engineer a script in order to dismantle malware, gauge how malicious the code is and determine what activities were involved in an incident.

Users can also derive prospective containment methods for an incident and, through a graphical mapping of the attack sequence, use origin analysis to trace how the malware materialized.

"This is a great example of providing some skill, some capability that the individual might not have and helping with the giant security skills gap we've gotten," Kawaguchi said. "There are more than 3.5 million unfilled security positions, and we have to find a way to help organizations both fill those positions with folks and make them higher skill."

Research firm Gartner predicted that, as a result of job-related stressors in the field, almost half of cybersecurity leaders will change jobs by 2025, and 25% will shift to different disciplines. Like Microsoft, Israeli startup Skyhawk Security has turned to AI to blunt the difficulties the talent shortage creates.

Last month, the company integrated ChatGPT into its cloud threat detection and response platform. According to Chen Burshan, CEO of Skyhawk Security, adding another layer of machine learning to its security analysis flow has reduced the alert fatigue that comes from receiving a notification for every single threat detected. With ChatGPT, responders receive only alerts severe enough to warrant analysis, he said.

"ChatGPT basically added another layer that allowed us to be more accurate in what we send to customers and also allow them to analyze it more quickly," Burshan said.

Skyhawk's classical machine learning framework uses malicious behavior indicators to detect real threats. These events are compiled into a single attack storyline and scored on their severity. ChatGPT is now a part of the company's scoring mechanism.

Trained on millions of security data points from across the web, the chatbot can read an attack sequence in general terms and produce a maliciousness score for the incident, along with an explanation of that score. If the score is high enough to cause concern, the incident is sent to incident responders for analysis.
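
Skyhawk has not published its prompts or thresholds, but the sketch below illustrates what such a scoring step could look like. The function name, prompt wording and ALERT_THRESHOLD value are hypothetical; only the general flow -- storyline in, score and explanation out, alert if the score clears a bar -- comes from the description above.

```python
# Hypothetical sketch of an LLM-based severity-scoring step.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
ALERT_THRESHOLD = 70  # illustrative cutoff; the real threshold is not public


def score_storyline(storyline: str) -> dict:
    """Ask the model for a maliciousness score and an explanation."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a cloud security analyst. Given an attack "
                    "storyline, respond only with JSON of the form "
                    '{"score": <0-100>, "explanation": "<one sentence>"}.'
                ),
            },
            {"role": "user", "content": storyline},
        ],
    )
    # A production system would validate this output before trusting it.
    return json.loads(response.choices[0].message.content)


# Example storyline, invented for illustration.
storyline = (
    "New IAM user created outside business hours, attached an admin "
    "policy, then enumerated storage buckets from an unfamiliar IP."
)
result = score_storyline(storyline)
if result["score"] >= ALERT_THRESHOLD:
    print(f"Alert responders: {result['score']} -- {result['explanation']}")
```

In production, the model's output would need validation -- LLMs do not always return strict JSON -- which is one reason human oversight remains part of these pipelines.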

In addition to optimizing the efforts of IT experts, the company is using the chatbot to accelerate the delivery of alerts. Skyhawk researchers ran tests showing that using the ChatGPT API to score threat severity sped up breach detection, with 78% of cases producing alerts earlier than without the AI technology.

"The whole process of understanding, reacting, responding, blocking and eventually also describing what happened gets much faster," said Amir Shachar, director of data science at Skyhawk. "That's kind of how we help customers: easier detection."

Given the efficiency Skyhawk researchers have seen from implementing ChatGPT, Burshan predicted that more companies will adopt the chatbot for additional security use cases.

"I think the industry will try to use ChatGPT more and more in real detection," Burshan said. "The technology can be for different functions and different security functionalities."

AI's future role in cybersecurity

In September, market research firm IDC predicted that worldwide spending on AI systems would surpass $300 billion by 2026. AI and machine learning systems are evolving quickly and have already been deployed in many cybersecurity products for years.

But even as ChatGPT models move further into defense settings, their functionality remains limited -- for now.

"AI can only be part of a defense and not its entirety," said Sean Gallagher, principal threat researcher at Sophos. "So, 'trust' in AI, at least for now, has to be limited to trust in it as part of a multilayered defense and as an aid to defenders in spotting potential malicious activity or content."

Although integrating versions of ChatGPT into cybersecurity tools can make processes more efficient, Gallagher said doing so requires several levels of human oversight.

Integrating AI into cybersecurity practices should ease some of the pressure on analysts' workloads, but it might also require additional analysis to ensure the technology itself is safe. As a result, AI will most likely make a slow entry into threat intelligence environments.

"We're a long way from 'lights out' security," Gallagher said.

Alexis Zacharakos is a student studying journalism and criminal justice at Northeastern University in Boston.
