How hackers use AI and machine learning to target enterprises
AI benefits security teams and cybercriminals alike. Learn how hackers use AI and machine learning to target enterprises, and get tips on preventing AI-focused cyber attacks.
Cybersecurity has benefited from advances in machine learning and AI. Security teams today are inundated with data about potentially suspicious activity but are often left searching for needles in haystacks. AI helps defenders find the real threats within this data via pattern recognition in network traffic, malware indicators and user behavioral trends.
Unfortunately, attackers have found their own ways to turn these advances in AI and machine learning against defenders. Easy access to cloud environments makes it simple to get started with AI and build powerful machine learning models.
Let's look at how hackers use AI and machine learning to target enterprises, as well as ways to prevent AI-focused cyber attacks.
3 ways attackers use AI against defenders
1. Test their malware's success against AI-based tools
Attackers can use machine learning in several ways. The first -- and simplest -- is to build their own machine learning environments and model their own malware and attack practices to determine the types of events and behaviors defenders look for.
A sophisticated piece of malware, for example, might modify local system libraries and components, run processes in memory and communicate with one or more domains owned by an attacker's control infrastructure. All these activities in combination create a profile known as tactics, techniques and procedures (TTPs). Machine learning models can observe TTPs and use them to build detection capabilities.
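As a rough illustration, the following sketch trains a simple classifier on TTP-style features. The feature names and data are synthetic stand-ins rather than any real product's telemetry, but the overall shape matches how detection models learn from TTPs: numeric behavioral features in, verdicts out.

```python
# Minimal sketch: training a detection model on TTP-style features.
# Feature names and data are hypothetical; real pipelines extract these
# from endpoint telemetry, network logs and sandbox reports.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row is one observed process: [library_mods, in_memory_exec,
# suspicious_domains_contacted, child_processes_spawned]
benign = rng.normal(loc=[0.2, 0.1, 0.1, 2.0], scale=0.5, size=(500, 4))
malicious = rng.normal(loc=[3.0, 1.0, 2.5, 6.0], scale=0.5, size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a new observation that modifies libraries, runs in memory
# and beacons to several attacker-controlled domains.
sample = np.array([[2.8, 1.0, 2.0, 5.0]])
print(model.predict_proba(sample))  # expect a high score for class 1
```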
By observing and predicting how TTPs are detected by security teams, adversaries can subtly and frequently modify indicators and behaviors to stay ahead of defenders who rely on AI-based tools to detect attacks.
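Attackers can rehearse this offline. The hypothetical sketch below perturbs a malicious sample's features against a locally trained stand-in model until the model scores it as benign, a crude proxy for the trial-and-error tuning described above.

```python
# Minimal sketch of attacker-side testing: perturb a malicious sample
# against a locally trained stand-in model until it scores as benign.
# The model and feature values are hypothetical; a real attacker would
# rebuild something resembling the defender's tooling in their own lab.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal([0.2, 0.1, 0.1, 2.0], 0.5, size=(500, 4))
malicious = rng.normal([3.0, 1.0, 2.5, 6.0], 0.5, size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

local_model = LogisticRegression(max_iter=1000).fit(X, y)

sample = np.array([3.0, 1.0, 2.5, 6.0])  # original malicious TTP profile
for step in range(100):
    if local_model.predict_proba([sample])[0, 1] < 0.5:
        break  # the local model now scores the sample as benign
    # Nudge the most incriminating feature toward typical benign values,
    # mimicking small behavioral changes such as fewer beacons.
    worst = np.argmax(sample * local_model.coef_[0])
    sample[worst] *= 0.8

print(step, sample, local_model.predict_proba([sample])[0, 1])
```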
2. Poison AI with inaccurate data
Attackers also use machine learning and AI to compromise environments by poisoning AI models with inaccurate data. Machine learning and AI models rely on correctly labeled data samples to build accurate and repeatable detection profiles. By introducing benign files that look similar to malware or by creating patterns of behavior that turn out to be false positives, attackers can trick AI models into believing attack behaviors are not malicious. Attackers can also poison AI models by slipping malicious files into training data sets where they are labeled as safe.
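A toy example makes the mechanism concrete. In the synthetic sketch below, an attacker seeds the training set with malware-like samples that carry safe labels, and detection of fresh malicious samples degrades accordingly.

```python
# Minimal sketch of training-data poisoning. All data is synthetic; the
# point is how injected, wrongly labeled samples degrade detection once
# they outweigh genuine malicious samples in a region of feature space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
benign = rng.normal([0.0, 0.0], 0.6, size=(500, 2))
malicious = rng.normal([2.0, 2.0], 0.6, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clean = LogisticRegression(max_iter=1000).fit(X, y)

# Poison: the attacker seeds the pipeline with files that behave like
# their malware but carry a "safe" label.
poison = rng.normal([2.0, 2.0], 0.6, size=(800, 2))
X_p = np.vstack([X, poison])
y_p = np.concatenate([y, np.zeros(800, dtype=int)])
poisoned = LogisticRegression(max_iter=1000).fit(X_p, y_p)

fresh_malware = rng.normal([2.0, 2.0], 0.6, size=(200, 2))
print("clean model detects:   ", clean.predict(fresh_malware).mean())
print("poisoned model detects:", poisoned.predict(fresh_malware).mean())
```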
3. Map existing AI models
Attackers actively seek to map the existing and developing AI models that cybersecurity providers and operations teams use. By learning how AI models function and what they do, adversaries can disrupt machine learning operations and models while they are being trained and updated. This can enable hackers to influence a model by tricking the system into favoring attackers and their tactics. It can also enable hackers to evade known models altogether by subtly modifying data to avoid detection based on recognized patterns.
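Model mapping can be as simple as querying a product and learning from its verdicts. The sketch below is a simplified stand-in: a black-box model answers probes, and the attacker fits a substitute model to those answers so evasion can be tested offline.

```python
# Minimal sketch of model mapping/extraction: query a "black box"
# detection model, then train a substitute on its answers. All models
# and data here are synthetic stand-ins for a vendor's API or agent.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
benign = rng.normal([0.2, 0.1, 0.1], 0.5, size=(500, 3))
malicious = rng.normal([2.0, 1.0, 2.0], 0.5, size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker probes the defender's model with samples of their
# choosing and records only the verdicts the product exposes.
queries = rng.uniform(-1, 4, size=(2000, 3))
verdicts = black_box.predict(queries)

# The substitute approximates the decision boundary; the attacker can
# now test evasion offline without tipping off the defender.
substitute = LogisticRegression(max_iter=1000).fit(queries, verdicts)
agreement = (substitute.predict(queries) == verdicts).mean()
print(f"substitute agrees with black box on {agreement:.0%} of queries")
```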
How to defend against AI-focused attacks
Defending against AI-focused attacks is extremely difficult. Defenders must ensure the labels associated with data used in learning models and pattern development are accurate. Verifying labels usually means vetting samples more carefully, which is likely to shrink the data sets used to train models -- a trade-off that doesn't help with AI efficiency.
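One practical label check is to flag training samples whose labels disagree with most of their nearest neighbors, a common symptom of mislabeled or poisoned data. The sketch below uses synthetic data and an illustrative threshold.

```python
# Minimal sketch of a label-sanity check before training: flag samples
# whose labels disagree with most of their nearest neighbors. Data and
# the 70% disagreement threshold are illustrative, not prescriptive.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
benign = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
malicious = rng.normal([2.0, 2.0], 0.5, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)
y[rng.choice(1000, size=30, replace=False)] ^= 1  # simulate bad labels

nn = NearestNeighbors(n_neighbors=11).fit(X)
_, idx = nn.kneighbors(X)  # each row: the sample itself + 10 neighbors

neighbor_labels = y[idx[:, 1:]]            # drop the self-match
disagreement = (neighbor_labels != y[:, None]).mean(axis=1)
suspect = np.where(disagreement > 0.7)[0]  # review before training

print(f"{len(suspect)} samples flagged for manual label review")
```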
For those building AI security detection models, introducing adversarial techniques and tactics during modeling can help align pattern recognition with tactics seen in the wild. Researchers at Johns Hopkins University developed the TrojAI software framework, which generates data sets and AI models containing trojans so researchers can study how to detect and withstand such attacks. MIT researchers released TextFooler, a tool that generates adversarial text to fool natural language processing models, which could help build more resilient AI models for tasks such as detecting bank fraud.
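A minimal version of that idea, shown below with synthetic data and an illustrative perturbation scheme, augments training with evasively perturbed copies of malicious samples so the model learns patterns that survive small tweaks.

```python
# Minimal sketch of adversarial training: augment the training set with
# perturbed copies of malicious samples. The perturbation scheme is a
# stand-in for whatever evasive tweaks attackers apply in the wild.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
benign = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
malicious = rng.normal([2.0, 2.0], 0.5, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

# Emulate evasion: attackers shave their most incriminating features
# toward benign-looking values.
evasive = malicious * rng.uniform(0.5, 0.9, size=malicious.shape)

baseline = LogisticRegression(max_iter=1000).fit(X, y)

X_adv = np.vstack([X, evasive])                       # add variants...
y_adv = np.concatenate([y, np.ones(500, dtype=int)])  # ...as malicious
hardened = LogisticRegression(max_iter=1000).fit(X_adv, y_adv)

# Compare detection rates on fresh evasively perturbed samples.
test = malicious * rng.uniform(0.5, 0.9, size=malicious.shape)
print("baseline catches:", baseline.predict(test).mean())
print("hardened catches:", hardened.predict(test).mean())
```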
As AI grows in importance, attackers will seek to outpace defenders' efforts with their own research. It is crucial for security teams to stay up to date with attackers' tactics to defend against them.