It's time to harden AI and ML for cybersecurity

An RSA Conference panel said that now is the time to become proactive against AI and ML adversarial attacks -- before they become more sophisticated.

AI and machine learning are hot topics in the technology industry, especially as ChatGPT and other generative AI tools dominate headlines. So, it's no surprise AI and ML were featured heavily at RSA Conference 2023.

One session, "Hardening AI/ML Systems -- The Next Frontier of Cybersecurity," featured a panel discussion about why now is the time to address protecting AI and ML from malicious actors.

Moderator Bryan Vorndran, assistant director of the cyber division at the FBI, explained that, as organizations integrate AI and ML into core business functions, they increase their attack surface. "Attacks can occur at every stage of the AI and ML development and deployment cycles," he said. "Models, training data and APIs could all be targeted."

One problem, he said, is that hardening AI and ML against attacks isn't at the forefront of development teams' minds.

"It's very important that everyone who is thinking of internal development, procurement or adoption of AI systems does so with an additional layer of risk mitigation or risk management," said Bob Lawton, chief of mission capabilities at the Office of the Director of National Intelligence.

Plus, the security industry is still trying to figure out the best ways to secure AI and ML.

Current attacks use low levels of sophistication

Current AI adversarial attacks aren't overly complex, the panel agreed. Christina Liaghati, manager of AI strategy execution and operations at Mitre, and Neil Serebryany, CEO of CalypsoAI, explained that most attacks today aren't much more than malicious actors poking at AI and ML systems until they break.
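To make that kind of probing concrete, here is a minimal sketch of a black-box evasion attempt: repeatedly perturb an input until the model's prediction flips. The library, model and data are illustrative assumptions, not anything described in the session.

```python
# A sketch of low-sophistication "poking": add escalating random noise to an
# input until a model misclassifies it. Synthetic data and model throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)

rng = np.random.default_rng(0)
x = X[0]                            # a sample the model classifies normally
original = model.predict([x])[0]

for attempt in range(1, 1001):
    # Escalate the noise each query until the prediction breaks.
    candidate = x + rng.normal(scale=0.05 * attempt, size=x.shape)
    if model.predict([candidate])[0] != original:
        print(f"Prediction flipped after {attempt} queries")
        break
```

No knowledge of the model's internals is needed here, which is why even unsophisticated attackers can succeed against unmonitored systems.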

The Chinese State Taxation Administration, for example, suffered an attack in which malicious actors exploited facial recognition models to steal nearly $77 million. The attackers used high-definition images of faces purchased on the black market, plus AI-generated videos that made the photos appear to blink and move, to fool the facial recognition software.

AI adversarial attacks will evolve, Liaghati warned, but attackers have little reason to evolve yet, given the consistent success of low-level attacks. Once the cybersecurity industry begins to implement proper AI security and assurance practices, however, that will change.

How to mitigate AI and ML attacks

AI adversarial attacks will never be fully preventable, but their effects can be mitigated. Serebryany suggested first using simpler models. If you can use a linear regression model over a neural network, for example, do so. "The smaller the model, the smaller the attack surface. The smaller the attack surface, the easier to secure it," he said.
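A minimal sketch of that tradeoff, assuming scikit-learn and a toy dataset (neither mentioned in the session): when a linear model matches a neural network's accuracy on the same task, the linear model is the safer deployment because it exposes far fewer parameters to attack.

```python
# Compare a linear model against a small neural network on the same task.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear model: roughly n_features + 1 parameters to attack.
linear = make_pipeline(StandardScaler(), LogisticRegression())
linear.fit(X_train, y_train)

# Small neural network: thousands of parameters for the same task.
neural = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
neural.fit(X_train, y_train)

print("linear accuracy:", linear.score(X_test, y_test))
print("neural accuracy:", neural.score(X_test, y_test))
# If the scores are comparable, prefer the linear model: smaller model,
# smaller attack surface, easier to secure.
```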

From there, organizations should establish data lineage and understand the data they're using to train AI and ML models, Serebryany said. They should also invest in tools and products to test and monitor the models as they're deployed into production.
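One way those two practices might look in code, as a sketch with hypothetical helper names (record_lineage and drift_alert are illustrations, not tools the panel named): record where training data came from and a hash to detect tampering, and flag production inputs that drift far from the training distribution.

```python
# Hypothetical helpers for data lineage and basic production monitoring.
import hashlib
import json
from datetime import datetime, timezone

import numpy as np

def record_lineage(data: np.ndarray, source: str, path: str) -> None:
    """Store where training data came from and a hash to detect tampering."""
    record = {
        "source": source,
        "sha256": hashlib.sha256(data.tobytes()).hexdigest(),
        "rows": int(data.shape[0]),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)

def drift_alert(train_data: np.ndarray, live_data: np.ndarray,
                threshold: float = 3.0) -> bool:
    """Flag live inputs whose per-feature mean drifts far from training."""
    z = np.abs(live_data.mean(axis=0) - train_data.mean(axis=0)) / (
        train_data.std(axis=0) + 1e-9)
    return bool((z > threshold).any())
```

Checks like these won't stop an attack on their own, but they surface the tampered data and anomalous inputs that low-sophistication attacks depend on going unnoticed.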

Mitigation and hardening techniques don't need to be sophisticated, Liaghati said, nor done separately from normal cybersecurity practices. She suggested organizations consider how much information they release publicly about the models and data behind their AI and ML. Not revealing what you're doing, she said, makes it harder for malicious actors to know how to attack your AI and ML models in the first place.

Early days for AI and ML attacks

AI adversarial attacks are only just beginning, the panel stressed. "We're aware of the fact that there is a threat, and we're seeing early incidents of it. But the threat is not full-blown just yet," Serebryany said. "We have this unique opportunity to really focus on building mitigations and a culture of understanding for the next generation of adversarial ML risk."

Just as attackers are discovering how to exploit AI and ML, organizations are figuring out the pros and cons of using AI and ML in their daily operations and how to harden them. The panel recommended organizations spend time learning about the potential cybersecurity issues with their specific uses of the technologies, then define their security posture and the solutions that address those problems.

The infosec community should be proactive, too, which involves developing partnerships. Lawton described the federal government and the intelligence community working together on AI and ML cybersecurity, with the goal of creating a network of developers and practitioners to build out AI and ML security capabilities now rather than later.

"We need to share our ground truth on the data, on what is actually happening, and on those tools and techniques that we can share across the community to actually do something about it," Liaghati added.
