
RSAC panel warns AI poses unintended security consequences

A panel of experts at RSA Conference 2023 warned of hallucinations and inherent biases but also said generative AI can assist in incident response and other security needs.

SAN FRANCISCO -- While a panel of experts at RSA Conference 2023 touted generative AI for a host of security uses, including incident response, they also warned the rapid adoption of the technology will present unintended consequences, particularly around the spread of disinformation.

Ram Shankar Siva Kumar, data scientist in Azure Security at Microsoft, moderated a panel titled "Security as Part of Responsible AI: At Home or At Odds" on Tuesday during the conference. He was joined by panelists Vijay Bolina, CISO at Google DeepMind; Rumman Chowdhury, founder of Bias Buccaneers; and Daniel Rohrer, vice president of software security at Nvidia. The discussion addressed whether and how security can keep pace with the whirlwind of large language model (LLM) use that OpenAI's ChatGPT sparked beginning in November 2022.

The panelists emphasized that the rapid adoption of AI had even reshaped their panel's talking points, because so much has changed in the space over the last six months. One challenge with responsible AI use, however, remained consistent: possible unintended consequences.

Chowdhury defined unintended consequences as harms introduced accidentally by well-meaning people, as opposed to deliberate abuse by malicious attackers. The distinction matters, she said, because it affects the approach to solving the problem.

"In one case, you're looking for people who create bots or spread disinformation intentionally. And then there are people who spread it unintentionally because they believe it. Both need to be resolved," Chowdhury said. "People can make deepfakes that are malicious, but if no one shares them, it doesn't have a big impact."

She attributed the heightened use of generative AI to enterprises' need for critical thinking and fast analysis, which the technology does address. On the other hand, the panelists emphasized concerns such as the potential for job losses in particular fields, inherent bias and even "hallucinations."

Hallucinations occur when an LLM provides responses that are inaccurate or not based in fact. This can lead to the spread of mass disinformation, Chowdhury warned.

"At Twitter, we worried a lot about election disinformation. AI may take that and amp it up. We have a politically contentious situation in a world in which it is very simple to make and spread disinformation at scale," she said.

Ryan Kovar, distinguished security strategist at Splunk, attended the panel Tuesday and said the hallucination concern rings true. When Kovar asked ChatGPT to produce a summary about himself, it returned a mix of his published work and a colleague's, conflating the two throughout.

"Another problem is AI, it doesn't lie, but it infers. You have to be more specific and get it down to what you're actually asking," Kovar said. "Still, the only people who are going to lose are the ones who don't adapt to AI."

To curb these problems, Rohrer said there needs to be a focus on building tools to manage unintended consequences, and problems need to be addressed with a systems-based approach. He believes there is currently an overemphasis on the model itself rather than on the larger system in which it is embedded.

Since the onset of increased AI use, Rohrer has dealt with the legal and ethics teams more than ever.

"What I learned in those conversations is that a lot of the things we do and the way we think in security applies very well -- looking at risks versus harms. It's the same things we want to do here," Rohrer said.

Chowdhury likewise described generative AI as a merging of ethics and security. The two disciplines are similar, she said, but not the same.

During an RSA Conference panel on AI, Rumman Chowdhury, founder of Bias Buccaneers, said unintended consequences of AI, such as hallucinations, can pose serious issues for users and organizations.

How AI helps

Just as AI use presents problems, it's also been implemented in various ways to improve enterprises' defensive postures. New products incorporate AI to gather and analyze threat intelligence as well as remediate vulnerability risks. The panelists noted it's also been helpful for red team training.

Bolina, who formerly worked at Mandiant, said generative AI will help with incident response (IR) as well. "As a former IR, I think if anything, it will make our jobs more efficient and more creative. We'll be able to do things much faster."

Separately, Jen Miller-Osborn, director of threat intelligence at Palo Alto Networks' Unit 42, agreed that AI and LLMs are helpful for IR. She said she has observed the benefit to security operations teams as well as within Palo Alto Networks. The vendor has been using AI and machine learning tools for some time to automate aspects of IR, especially low-level tasks.

"We're able to save our people for the actual incidents where you need people," Miller-Osborn said. "We've taken our IR time down from 40 days to one minute because machines can do that. They can make those determinations much faster than people. Especially when you have a platform of things, it can make ties from the firewall to the endpoint to the logs we're seeing."

Arielle Waldman is a Boston-based reporter covering enterprise security news.
