What generative AI's rise means for the cybersecurity industry
ChatGPT's moment in cybersecurity is significant for both technological and marketing reasons. Security analysts and experts differ on which matters more.
ChatGPT's cultural and economic ascent in recent months has driven interest in generative AI as a whole, and cybersecurity has been part of that moment. However, experts differ on whether the moment is steeped more in marketing or in emerging technology.
ChatGPT, developed and published by research firm OpenAI, is a large language model (LLM), a type of AI model used to generate text. LLMs are themselves a form of generative AI, an emerging branch of artificial intelligence in which models trained on massive amounts of data are used to create content such as images, audio or text; OpenAI's image generator Dall-E is another example.
The immense popularity of ChatGPT was no doubt assisted by Microsoft's announced multibillion-dollar investment in OpenAI, which led to the chatbot's integration with the software giant's Bing search engine. In the wake of that investment, a number of "AI-powered" products have entered the market in the last six months. For example, generative AI was the unofficial theme of RSA Conference 2023 in April, as many vendors had AI-powered offerings to pitch.
Several cybersecurity vendors at the conference said they had been using AI and machine learning for years. Artificial intelligence, as an extremely broad concept, has been integrated into security tools in various forms for decades, and some vendors have spent years building advanced datasets.
But generative AI is on the rise, though experts were divided about what has led to this moment. Some said it was the result of marketing more than actual technological advancement, while others said generative AI like ChatGPT marks a watershed moment.
Generative AI in cybersecurity
OpenAI declined a request for an interview. Instead, TechTarget Editorial asked the public-facing research preview of ChatGPT how cybersecurity professionals use it (under the prompt "How do cybersecurity professionals use ChatGPT?").
The chatbot replied with several examples, such as security policy and security awareness training documents; vulnerability assessments, including performing scans, interpreting reports and suggesting remediation; threat hunting, which includes parsing through logs, identifying patterns and detecting indicators of compromise; and threat intelligence analysis, such as simplifying reports down to relevant data and quickly gathering insights from security advisories and online forums.
"It is important to note that while ChatGPT can provide valuable assistance, cybersecurity professionals should exercise caution and apply their expertise," the chatbot's reply read. "They should critically evaluate the information provided by ChatGPT and verify its accuracy using reliable sources and established security practices."
Asked about generative AI as a whole (under the prompt, "How about generative AI as a whole (not specifically ChatGPT)?"), ChatGPT mentioned multiple additional use cases, such as malware analysis, password generation and cracking, and red teaming (creating realistic phishing emails and "generating synthetic attack traffic").
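As an illustration of the threat intelligence use case the chatbot describes, the sketch below shows how an analyst might wire an LLM into an advisory-summarization workflow. It is a minimal sketch, assuming OpenAI's Python client as it existed in 2023; the model name, prompt wording and output format are illustrative assumptions, not details from OpenAI or the vendors quoted here.

```python
# Minimal sketch: asking an LLM to condense a security advisory into
# the facts an analyst cares about. Assumes the OpenAI Python client
# (pip install openai) and an API key in the environment; the model
# name and prompt wording are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_advisory(advisory_text: str) -> str:
    """Reduce an advisory to affected products, severity, indicators
    of compromise and suggested remediation steps."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a security analyst. Summarize the "
                    "advisory into: affected products, severity, indicators "
                    "of compromise, and remediation steps."
                ),
            },
            {"role": "user", "content": advisory_text},
        ],
        temperature=0,  # favor consistent, conservative output
    )
    return response.choices[0].message["content"]
```

As the chatbot's own caveat suggests, anything a script like this returns would still need to be verified by a human analyst against reliable sources.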
Though numerous offerings involving generative AI have launched in recent months, two of the most prominent came from tech giants Google and IBM, both of which debuted products at RSA Conference 2023.
IBM launched QRadar Suite, which paired new versions of IBM's QRadar security products with a generative AI-powered interface. Google announced Google Cloud Security AI Workbench. Both use generative AI for capabilities such as automated threat hunting and prioritized breach alerts, though there are differences between the two.
The applications for generative AI in cybersecurity are wide-ranging, though it's unclear at this early stage how effective the technology will be. Chris Steffen, vice president of research, security and risk management at analyst firm Enterprise Management Associates, offered an example: if a non-security-oriented organization received a report on a vulnerability relevant to it, a chatbot could translate the report's technical data for an upstream executive who might not have the same security knowledge as the organization's CISO.
Jon Oltsik, an analyst at TechTarget's Enterprise Strategy Group, referred to ChatGPT as a "helper app" that threat analysts can use to ask about specific threat actors or tactics, techniques and procedures. He said it can also write detection rules or help reverse engineer malware.
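A similar sketch illustrates the "helper app" pattern Oltsik describes: drafting a detection rule for an analyst to review rather than deploy blindly. Again, the client usage, model and prompt are assumptions for illustration; Sigma is used here simply as a common, vendor-neutral rule format.

```python
# Sketch of the "helper app" pattern: drafting a Sigma detection rule
# for human review. Model choice and prompt wording are assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_sigma_rule(behavior: str) -> str:
    """Ask the model for a first-draft Sigma rule (YAML) covering the
    described behavior; a detection engineer reviews it before use."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Write a Sigma detection rule in YAML for the behavior "
                    "the user describes. Output only the rule."
                ),
            },
            {"role": "user", "content": behavior},
        ],
        temperature=0,
    )
    return response.choices[0].message["content"]

# Example: draft_sigma_rule("PowerShell spawned by a Microsoft Office process")
```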
Vladislav Tushkanov, lead data scientist at Kaspersky Lab, said that although there are a number of benefits, current technical limitations mean many experts and vendors are still in the experimentation phase with tools like ChatGPT. At least currently, he said, "the impact does not seem to be high."
"LLMs still suffer from many limitations, such as their propensity to hallucinate and confidently express completely false information," he said. "For this reason, it is too early to apply LLMs to actual cybersecurity tasks that require precision, speed and reliability. They can, however, be used to summarize data and present it in a more convenient way, and we can see more such solutions in the future."
Ketaki Borade, senior analyst of infrastructure security at analyst firm Omdia, similarly said generative AI is "finding its place" for process automation but is not replacing human work wholesale.
"At some point, verification by humans is still necessary even in AI-automated tools," she said.
Real tech versus marketing buzz
Steffen said he felt "50%-60%" of the hype behind generative AI was based in marketing, while "15%-20%" of vendors were using the technology to do interesting things.
"I see all these advancements as iterative advancements. I don't see them as groundbreaking," he said. "I don't think there's anybody that can realistically say AI hasn't been a mainstay -- or at least creeping into the security space -- really from the very get-go."
But despite the lean toward marketing, he said the push behind AI is helping organizations use these emerging tools "with more confidence in being able to sleep at night."
"I think it's important for our security leaders to come out and say that it's okay to trust some of this AI stuff," Steffen said. "I think it's important that we start taking and offloading some of these tasks to AI when it's appropriate and, obviously, with some human security review. But I think that's a step in the right direction."
John Dwyer, head of research at IBM X-Force, told TechTarget Editorial at RSA Conference that he similarly felt AI's moment reflected an acceptance of AI's place within the enterprise more than any specific technological breakthrough.
Oltsik said there is "tremendous momentum" behind machine learning and related concepts, such as behavioral analytics, and that this will only continue.
"With generative AI in products, we are really talking about future use cases," Olstik said. "Security pros are skeptical by nature, and many will take a cautious approach. But it's likely that security teams will be overwhelmed by products and capabilities soon. The key for now is AI governance, policies, policy enforcement and monitoring. In other words, CISOs should be working with other executives to put in the appropriate guardrails in place before the tsunami hits."
Borade said the technology is still in an experimental phase but recommended vendors reject the impulse to "watch and see." Vendors should be working toward AI security now, she said, as it's "just a matter of who gets there first."
The technology appears to be taking hold of the industry in more ways than one. At RSA Conference, Borade noticed a trend of professionals discussing how best to write prompts to get optimum output from ChatGPT, and that "it was said there would be a new job title coming up called 'prompt engineer.'"
Defining AI's moment in security
Steffen said he was "glass half full" about ChatGPT's big moment and predicted the companies that embrace generative AI will emerge as innovators.
"I don't see ChatGPT as a negative," he said. "I think those vendors that are looking to increase their use of the various AI technologies are only going to be leaders in the long run. And the companies using those vendors that implement those technologies are going to be leaders in their particular industries."
AI's rise also creates opportunities for threat actors. Attackers have used deepfakes in spearphishing efforts, such as to impersonate a celebrity. Tushkanov said Kaspersky experts have found "a variety of offers" on the darknet to create videos on demand. As for chatbots and text-generating models, he said there is potential for misuse -- such as writing phishing emails or creating malicious code -- but it has not changed the threat landscape much.
Chester Wisniewski, field CTO of applied research at Sophos, told TechTarget Editorial that while much of the discussion around generative AI concerns its use by threat actors, the hype persists because "there is so much upside and opportunity" for defenders.
"I'm much less worried about the malicious stuff and much more interested in what the good guys are doing," he said. "Because this technology is not easy to train. It's not cheap to do. It's not something that the criminals are going to bother with because what they're doing already works."
AI, he said, could be part of the solution for defenders.
"We need to do something better, because clearly we're not protecting people well enough as an industry, and we're trying to come up with ways to enable people to be better protected," Wisniewski said. "There's an enormous opportunity for us to use these things to do that."
Alexander Culafi is a writer, journalist and podcaster based in Boston.