
HC3 Warns Healthcare of AI’s Use in Malware Development

In its latest brief, HC3 details how artificial intelligence (AI) can be used by threat actors to aid in malware development, posing a serious threat to healthcare cybersecurity.

Artificial intelligence (AI) tools play an increasingly important role in cybersecurity. AI models can be leveraged to defend the healthcare sector against cyber threats. On the other hand, AI's capabilities could also be used for malware development to harm the healthcare sector, an analyst note from the Health Sector Cybersecurity Coordination Center (HC3) suggested.

“Artificial Intelligence (AI) has most notably been applied to the defensive side of cybersecurity. It has been used to detect threats, vulnerabilities, and active attacks and to automate security tasks,” the analyst note stated.

“However, because of its known defensive use and because threat actors in cyberspace are known to be highly creative and well-resourced, concerns have been raised in the cybersecurity community about the potential for artificial intelligence to be used for the development of malware.”

AI tools work 24 hours per day, seven days per week. This constant monitoring means they can quickly process large quantities of data and detect threats, making them an ideal cybersecurity measure for healthcare facilities.
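To make that idea concrete, the following is a minimal, illustrative sketch of the kind of always-on anomaly detection described above, using scikit-learn's IsolationForest on hypothetical network-flow features. The feature names and values are assumptions for illustration, not anything specified in HC3's note.

```python
# Illustrative sketch: unsupervised anomaly detection over network-flow
# features, standing in for the continuous monitoring described above.
# All feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline traffic: [bytes_sent, packet_count, session_seconds]
baseline_flows = rng.normal(loc=[5_000, 40, 30], scale=[500, 5, 5], size=(1_000, 3))

# Fit an unsupervised model on what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_flows)

# New observations arrive around the clock; the second one is a large
# burst over a very short session and should stand out.
new_flows = np.array([
    [5_100, 42, 28],
    [900_000, 3, 2],
])
labels = model.predict(new_flows)  # 1 = normal, -1 = anomaly

for flow, label in zip(new_flows, labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status}: bytes={flow[0]:.0f}, packets={flow[1]:.0f}, seconds={flow[2]:.0f}")
```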

Research has shown that AI-driven programs can collect valuable data to help cybersecurity experts garner new insights about cybercriminal behavior.

For example, HC3 pointed out that “DeepLocker was developed to better understand how artificial intelligence models could be combined with existing malware techniques to create more potent attacks. In the case of DeepLocker, it analyzes the payload distribution lifecycle based on a deep neural network (DNN) AI model to look for appropriate ‘trigger conditions’ in order to reach the intended target.”
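The mechanism IBM's researchers described can be sketched at a very high level: the payload stays encrypted, and the key is derived from attributes of the intended target, so the model's recognition of the target is what unlocks it. The snippet below is a benign, conceptual illustration of that environmental-keying idea only; the "model" is a stub, the "payload" is a harmless string, and nothing here reflects DeepLocker's actual code.

```python
# Conceptual, benign illustration of the environmental-keying idea
# behind DeepLocker: data stays encrypted until the observed target
# attributes reproduce the key that was used to conceal it.
# The "model" is a stub and the "payload" is a harmless string.
import base64
import hashlib
from cryptography.fernet import Fernet

def derive_key(attributes: str) -> bytes:
    # Target attributes double as key material (the "trigger condition").
    digest = hashlib.sha256(attributes.encode()).digest()
    return base64.urlsafe_b64encode(digest)

def model_recognizes_target(observed: str) -> bool:
    # Stand-in for the DNN; DeepLocker reportedly matched features such
    # as faces or voices, while this stub just compares strings.
    return observed == "intended-target-attributes"

# Concealment: encrypt a benign message under the target-derived key.
token = Fernet(derive_key("intended-target-attributes")).encrypt(b"benign demo message")

# Recovery is only possible when the observed environment reproduces
# the exact trigger condition.
observed = "intended-target-attributes"
if model_recognizes_target(observed):
    print(Fernet(derive_key(observed)).decrypt(token))
```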

However, AI is now evolving and can be leveraged by threat actors to further the development of malware and phishing schemes.

“While the use of AI is still very limited and requires a sophisticated user to make it effective, once this technology becomes more user-friendly, there will be a major paradigm shift in the development of malware,” HC3 stated. “One of the key factors making AI particularly dangerous for the healthcare sector is the ability of a threat actor to use AI to easily and quickly customize attacks against the healthcare sector.”

ChatGPT, an AI tool based on the GPT-3.5 language model, raised significant concerns shortly after its release. The platform was created to interact with humans and respond to their conversations and requests.

These capabilities piqued widespread interest, as the platform gained a million users in the first six days of its launch.

Due to the tool's accessibility and capabilities, prominent members of the cybersecurity community investigated how it might be used to develop malware.

Following this investigation, researchers found that the tool could craft credible phishing emails.

Additionally, less than two months after the release of ChatGPT, the tool was used for malware development purposes.

Specifically, on December 21, 2022, a threat actor posted a Python-based multi-layer encryption/decryption script on an underground hacking forum. The script could potentially be used as ransomware, the analyst note suggested.
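For context, "multi-layer encryption" in Python can be as simple as wrapping data under more than one key using a standard library such as cryptography. The benign sketch below shows the general shape of such a routine; it is not the forum script HC3 describes.

```python
# Benign sketch of a two-layer encrypt/decrypt routine using the
# `cryptography` package; for context only, not the forum script.
from cryptography.fernet import Fernet

# Two independent keys: data is wrapped twice, then unwrapped in reverse.
inner, outer = Fernet(Fernet.generate_key()), Fernet(Fernet.generate_key())

plaintext = b"routine test data"
ciphertext = outer.encrypt(inner.encrypt(plaintext))  # layer 1, then layer 2
recovered = inner.decrypt(outer.decrypt(ciphertext))  # peel layers in reverse order
assert recovered == plaintext
```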

“Current artificial intelligence technologies are widely believed to only be at the very beginning of what will likely be a whole array of capabilities that will cut across industries and enter into people’s private lives,” HC3 wrote.

“The cybersecurity community is far from developing mitigations and defenses for such malicious code, and it remains unclear if there will ever be ways to specifically prevent AI-generated malware from being successfully used in attacks. There are already debates and discussions on the ethical use of AI systems and the proper governing models that should be deployed to ensure they are confined appropriately.”

HC3 suggested that healthcare organizations use existing resources that explore the malicious side of AI to protect their infrastructure as these threats become more prevalent.
