Explaining AI's impact on ransomware attacks and security
Experts say AI and LLMs will make ransomware attacks more dangerous, but those same tools could be turned against criminals by bolstering ransomware protections.
It's no secret that cybercriminals use artificial intelligence and large language models to raise their ransomware game. AI and LLMs can aid the crafting of more convincing phishing emails, enable ransomware to more easily bypass security defenses and avoid detection, and help target victims more effectively.
The AI and ransomware story is not all doom and gloom, however. AI provides a powerful assist to ransomware defense tools and best practices. With AI, detection software can more quickly and accurately identify ransomware attacks. AI can also accelerate mitigation and recovery efforts. Combined with threat intelligence data, AI can help security teams keep pace with emerging ransomware threats or shifts in tactics.
How AI makes ransomware attacks more dangerous
People might assume that cybercriminals use AI only to craft phishing email messages that better persuade victims to click on malicious links. While that is a common use, AI enhances the chances of success for ransomware attacks at all levels. The following are some key ways that cybercriminals use AI with ransomware attacks:
Research and reconnaissance. The more attackers know about potential victims, the better chance they have for a successful attack. Attackers use AI to identify victims, locate critical assets and assess vulnerabilities much more quickly and accurately than when that work is done manually. "[Reconnaissance is the] first thing attackers do," said Mark Lynd, head of executive advisory and strategy at Netsync, an IT and security consulting company and MSP. "They scan networks for vulnerabilities, misconfigurations, unpatched systems. It gives them a roadmap to get in and attack your organization. Once they get in, they can use an AI-driven bot to automate privilege escalation and spread ransomware laterally through the organization."
Targeting. Once a victim organization is identified, ransomware attackers want to know which individuals they should target with a phishing email or other means designed to enable system access. AI can speed the process of identifying people with access to important data and their relationships with other key individuals. Social media and corporate websites are key sources of targeting data for AI to use. LinkedIn, for example, provides data from which AI can identify targets and their relationships within an organization, Lynd said.
Customization. Attackers can then use AI to customize messages sent to the targeted victims. AI can gather and analyze public information from the organization's website, social media or news stories to create more relevant and timely messages used in social engineering or phishing efforts. "Because AI uses natural language processing, [an AI-written email] can bypass the traditional spam filters," Lynd said.
Automation. AI can automate not only the above tasks but also those an attacker must execute after successfully infecting a system. This includes the identification of data to encrypt or corrupt, countermeasures against a victim's mitigation attempts and the communication of demands.
AI can help attackers scale and automate the creation of phishing emails, said Mayur Rele, senior director for IT and information security at Parachute Health. For example, attackers can use AI and LLMs to craft highly convincing and personalized phishing emails that trick users into downloading malware or providing credentials. "These emails often lack the telltale signs of traditional phishing, like poor grammar or generic content," Rele said.
AI algorithms can autonomously scan networks or systems for vulnerabilities at a speed and scale unattainable by humans and then exploit them to deploy malicious software. This reduces the time from reconnaissance to infection, often catching defenders off guard. "You don't need any special skills to use that," Rele said. "The tools are open in the market. Anyone can register and use them. There are a lot of AI tools on the dark web that could be similar to what OpenAI, Grok and Gemini could be offering."
Continuous learning. AI enables attackers to more quickly adapt to defenses and mitigation and recovery attempts. This helps them to avoid detection more easily and better ensure that ransoms are paid.
How AI could improve ransomware defense
All the benefits of AI that cybercriminals employ during ransomware campaigns are available to defenders, too. AI simply takes the cat-and-mouse game to another level. The technology helps defenders keep pace with -- and often defeat -- ransomware attacks through several methods, including the following:
Behavioral analysis. AI helps security tools analyze user and system behaviors to more quickly and accurately detect anomalies that might indicate a ransomware attack. A suspicious anomaly might be a new network traffic pattern or access from an unknown IP address or unusual location. "Most of the ransomware attacks come from unidentified networks," said Rele, who has co-authored a paper on using AI to detect ransomware.
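As a rough illustration of this kind of behavioral baselining, the Python sketch below flags logins from network prefixes a user has never been seen on before. The class, the /24 prefix rule and the sample IPs are invented for the example and are far simpler than what commercial detection tools actually do.

```python
from collections import defaultdict

class LoginAnomalyDetector:
    """Toy behavioral baseline: flag logins from network prefixes
    a user has never been observed on (illustrative only)."""

    def __init__(self):
        self.known_prefixes = defaultdict(set)  # user -> set of /24 prefixes

    @staticmethod
    def _prefix(ip: str) -> str:
        # Treat the first three octets as the "network" for this sketch.
        return ".".join(ip.split(".")[:3])

    def observe(self, user: str, ip: str) -> None:
        """Record a known-good login during the baselining period."""
        self.known_prefixes[user].add(self._prefix(ip))

    def is_anomalous(self, user: str, ip: str) -> bool:
        """True if this login comes from a prefix not in the baseline."""
        return self._prefix(ip) not in self.known_prefixes[user]

detector = LoginAnomalyDetector()
for ip in ["10.1.2.5", "10.1.2.9", "198.51.100.4"]:
    detector.observe("alice", ip)

print(detector.is_anomalous("alice", "10.1.2.77"))     # False: known prefix
print(detector.is_anomalous("alice", "203.0.113.10"))  # True: unseen prefix
```

Real products weight many signals (time of day, device fingerprint, data volume) and score them with models rather than a single hard rule, but the baseline-then-compare pattern is the same.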
Modern data platforms, such as Cohesity, Rubrik and Veeam, connect with extended or managed detection and response tools. These XDR/MDR products monitor and quarantine anything that looks suspicious. The top XDR and MDR products use AI to perform behavioral analytics to identify threats that might evade traditional signature-based detection. Because they are integrated with data platforms, the AI-enhanced detection and response tools help keep ransomware from corrupting data files and backups.
Response and recovery automation. No organization will have enough skilled human security staff to identify and investigate every potential threat, Lynd said. The number, sophistication and rapid evolution of ransomware and other threats are growing too quickly. Using AI to automate threat detection and response is necessary to keep the workload manageable for the human staff. AI features found in data platforms, MDR/XDR products, firewalls and other defenses all play a role in automating response and recovery.
Endpoint protection via AI agents. Lynd also noted the importance of AI agents deployed at endpoints to more quickly detect and respond to ransomware and other threats. "IT and security folks don't control the endpoints," he said. "If [endpoints are] sitting in Starbucks, now you're trying to protect the people on the network of Starbucks, which can be anybody. You've increased your attack surface." An AI agent could analyze patterns at the endpoint to identify a phishing attempt.
How to prevent AI-powered ransomware attacks
The basics of ransomware prevention -- such as employee training, security controls and processes, response plans and data backups -- still apply in the world of AI. The enhanced threat that ransomware presents thanks to AI will require some tweaking of those basics. The following are some specific actions to consider.
Update employee training. All users within the organization should understand what to look for. "Every company is launching AI, so you need to train the users first," Rele said. He suggested selecting a good training program and then running simulation tests. "Not everyone is going to absorb what's in the training. You need to run some simulations, like phishing drills."
Rele recommends training tech staff as well on skills such as secure coding in an AI environment. As for training security teams, "A lot of the tools have AI capabilities now, but that doesn't mean that you need somebody on staff who's an AI expert," he said. "All you need is a good software engineer who can be trained on AI."
Deploy AI-enhanced security tools at the network and endpoint levels. Having these tools in place is necessary to counter the speed at which an AI-enhanced ransomware attack can occur. Tools such as Darktrace or ExtraHop can automatically shut down a host if they detect files being encrypted, then restore the files if needed. "You need some kind of a program that can do automated restore," Rele said. "Then, if you have an endpoint detection installed on your laptop, [the tool] will see that you're trying to do a malicious activity and it will cut the connection, give you an error and put the logs on screen. Then, your IT and security teams can look into the logs and figure out whether it's genuine."
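To illustrate the kind of signal such tools might act on, here is a hedged Python sketch that flags a burst of high-entropy file writes, a common heuristic for detecting mass encryption in progress. The thresholds, window size and synthetic events are assumptions for the example, not any vendor's actual detection logic.

```python
import math
import os
from collections import deque

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of information; encrypted or compressed data sits near 8.0,
    while ordinary documents are typically well below 6."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def looks_like_mass_encryption(events, window=60.0, burst=100, entropy_floor=7.5):
    """events: iterable of (timestamp_seconds, bytes_written).
    Returns True if `burst` high-entropy writes land inside `window` seconds
    (a deliberately crude stand-in for a real detection model)."""
    recent = deque()
    for ts, data in events:
        if shannon_entropy(data) >= entropy_floor:
            recent.append(ts)
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= burst:
            return True
    return False

# Synthetic demo: 150 random (encryption-like) writes within 30 seconds.
encrypted_like = [(i * 0.2, os.urandom(4096)) for i in range(150)]
# The same write rate with ordinary low-entropy text does not trip the rule.
normal = [(i * 0.2, b"quarterly report draft " * 100) for i in range(150)]

print(looks_like_mass_encryption(encrypted_like))  # True
print(looks_like_mass_encryption(normal))          # False
```

A production tool would combine this with file-rename patterns, process lineage and automated host isolation, but the entropy-plus-burst signal is a reasonable first approximation of what "seeing files being encrypted" means in practice.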
Rele views AI tools as crucially important. "The problem is that people don't want to invest in the right tools. Companies that have low budgets for AI tools or security tools -- that's where they're getting exploited. And then attackers know that."
Create a baseline for network activity. Before AI can detect anomalous behavior on an organization's network, it needs to know what's normal traffic. Rele cited the example of an e-commerce business: AI knows from which states the order traffic is likely to originate; if that pattern deviates, the AI can send an alert to check whether that activity is genuine.
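A minimal sketch of that e-commerce baseline idea follows, with invented state data and a deliberately crude deviation rule; real systems would use statistical tests or learned models rather than a fixed share threshold.

```python
from collections import Counter

def build_baseline(historical_states):
    """Fraction of orders per state, computed from a known-good period."""
    counts = Counter(historical_states)
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

def deviation_alerts(baseline, recent_states, min_jump=0.05):
    """Toy rule: alert on any state whose recent share of traffic exceeds
    its historical share by more than `min_jump`."""
    counts = Counter(recent_states)
    total = sum(counts.values())
    alerts = []
    for state, n in counts.items():
        share = n / total
        if share > baseline.get(state, 0.0) + min_jump:
            alerts.append(state)
    return sorted(alerts)

# Hypothetical order history: traffic normally comes from CA, NY and TX.
history = ["CA"] * 50 + ["NY"] * 30 + ["TX"] * 20
baseline = build_baseline(history)

# A sudden surge from a state never seen before stands out immediately.
recent = ["CA"] * 10 + ["NY"] * 5 + ["ZZ"] * 15
print(deviation_alerts(baseline, recent))  # ['ZZ']
```

The point of the baseline period is exactly what Rele describes: until the system knows what normal looks like, it has nothing to compare a deviation against.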
Monitor and limit publicly accessible data. Ransomware gangs can use AI to quickly gather and analyze information available online about a target organization. This helps them identify whom to target as well as the IT products used on a network and their vulnerabilities. "If a manufacturer has a use case about an organization and it mentions specific security products, networking products (Cisco, Fortinet, CrowdStrike), AI is amazing at collecting that information," Lynd said. With a simple prompt using an organization's name, AI pulls that information. "Once you have that, you can look at the CVE [list], which shows the vulnerability and exploits, and now you know what ports to look at. It's great at determining what organizations are vulnerable, where they're vulnerable and if they're low-hanging fruit."
Have a tested incident response plan. Getting hit with an AI-enhanced ransomware attack means less time to respond and recover. If an incident response plan does not take this into account, then the impact of the attack will be far worse. "It's unbelievable how few organizations actually have a tested incident response plan," Lynd said. "Some of the nastier versions of ransomware can encrypt 55,000 files a minute. If you don't catch those early indicators of compromise, it's already [a] business continuity [issue]. If you're not careful, you end up in disaster recovery."
Conduct tabletop exercises for AI-enhanced ransomware. Lynd recommends incident response tabletop exercises for hypothetical AI-enhanced ransomware attacks. This helps key defenders and other stakeholders better understand the threat and enables an organization to test and fine-tune its incident response plan.
Emerging threats from AI-enhanced ransomware
While ransomware groups are lagging behind state-sponsored advanced persistent threat groups in AI use, it's only a matter of time before they catch up. "We've seen groups like Indrik Spider and Scattered Spider using [AI] to conduct research or to leverage generative AI -- LLMs specifically -- to help create scripts for things like pulling data out of Entra ID or writing PowerShell scripts and things along those lines," said Adam Meyers, senior vice president of counter adversary operations at CrowdStrike.
Ransomware researchers see several AI-related threats, including the following:
Antimalware evasion. Since a lot of security software relies on malware signatures for detection, attackers are beginning to use AI to make malware self-modifying, rendering those signatures obsolete. "We're starting to see malware that can be polymorphic," Lynd said. "If [ransomware is] polymorphic, it can actually trick [security software] by learning what the tools are looking for and then modifying their payload to avoid detection."
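To see why polymorphism defeats signature-based detection, the toy Python sketch below models a scanner as an exact hash lookup and shows that any change to the payload evades it. The payload bytes are harmless placeholders, and real polymorphic engines rewrite far more than a version string, but the failure mode is the same.

```python
import hashlib

def signature_match(payload: bytes, known_bad_hashes: set) -> bool:
    """Classic signature check: exact hash lookup against a blocklist."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

# A harmless stand-in for a malicious payload that analysts have cataloged.
original = b"EXAMPLE-PAYLOAD-v1"
signatures = {hashlib.sha256(original).hexdigest()}

# A polymorphic engine re-encodes itself with each infection; here a single
# changed byte stands in for a full mutation of the payload.
mutated = b"EXAMPLE-PAYLOAD-v2"

print(signature_match(original, signatures))  # True: known sample is caught
print(signature_match(mutated, signatures))   # False: the mutation slips past
```

This is why the defensive tools discussed earlier lean on behavioral analytics: what the malware does (mass file encryption, lateral movement) is much harder to mutate away than what its bytes look like.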
AI ransomware as a service. Many ransomware groups sell their services and tools to other cybercriminals. It's only logical for some of them to enhance their offerings with AI. "You can go out with 25 grand, start your own ransomware as a service," Lynd said. "More and more of these are AI-driven. They even have AI-driven chatbots that can handle the negotiations."
Ransomware that targets cloud AI models. Ransomware typically targets key data files and systems to make them unavailable to the victim. Similarly, an attack on an organization's cloud-based AI models could render those useless; criminals would then be in a position to extort a fee for their return. "In the cloud domain, we've seen that threat actors gain access to the cloud," Meyers said. "Inside of some of these cloud control planes are foundational AI models. We started seeing threat actors going after the models inside of that cloud. If you have access to a major cloud provider that provides foundational models, you can leverage compromised cloud access to misuse or abuse their GenAI. What we used to see a couple of years ago was that a cloud attack was typically to spin up a virtual machine inside of the cloud and then to use it for cryptocurrency mining. I think we'll start to see threat actors gaining cloud access and then using that to broker access to other threat actors to that foundational model."
Dynamic ransom adjustment. Lynd has seen reports of ransomware groups using AI to assess a target's ability and likelihood to pay a ransom. This includes determining whether they have a cryptocurrency account or cyber insurance. For example, AI might help an attacker learn that an organization doesn't have cyber insurance or a viable backup; in those situations, cybercriminals could demand higher ransoms.
Michael Nadeau is an award-winning journalist and editor who covers IT and energy tech. He has held senior positions at CSO Online, BYTE magazine, SAP Experts/SAP Insider and 80 Micro.