
OpenAI details how threat actors are abusing ChatGPT

While threat actors are using generative AI tools like ChatGPT to run election influence operations and develop malware, OpenAI says the efforts are rarely successful.



Nation-state actors are using ChatGPT for threat activities such as malware debugging, according to a Wednesday report by OpenAI.

The report, titled "Influence and Cyber Operations: An Update," analyzed how threat actors have utilized OpenAI models, primarily ChatGPT, to conduct threat activity, as well as -- when relevant -- how OpenAI disrupted said activity. The report claimed that since the start of 2024, OpenAI disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models."

Threat actor use cases listed in the report ranged from the established and expected -- such as generating spear phishing emails -- to more innovative ones. For example, the Iranian threat actor known as Storm-0817 used OpenAI models to assist with developing and debugging "rudimentary" Android malware alongside corresponding command and control infrastructure. Storm-0817 also used OpenAI models for assistance in creating an Instagram scraper and translating LinkedIn profiles into Persian.

OpenAI also cited abuse activity from a suspected China-based threat actor known as "SweetSpecter," which made unsuccessful phishing attacks against the company itself. SweetSpecter attempted to use ChatGPT to debug code for a cybersecurity tool extension as well as a framework for sending malicious text messages.

The report documented activity from a third group known as "CyberAv3ngers," which is associated with the Iranian Islamic Revolutionary Guard Corps and was responsible for attacks on water utilities last year. CyberAv3ngers used ChatGPT for debugging activities, vulnerability research and scripting advice, as well as more specific queries like listing industrial protocols and ports that connect to the public internet.

Although the use of generative AI in malware development is nothing new, OpenAI's Wednesday report described how threat groups are taking advantage of tools like ChatGPT to supplement their tactics, techniques and procedures.

ChatGPT has also been used in influence operations. Wednesday's research focused heavily on election threats and referenced a separate OpenAI report from August, which described how an Iranian influence operation dubbed Storm-2035 used OpenAI's popular tool to create articles for social media and websites about both sides in the upcoming U.S. presidential election, the conflict in Gaza and other political topics.

"They interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following," OpenAI said.

However, it appears that threat actors have had mixed success with generative AI tools to date. The August report noted a "lack of meaningful audience engagement" stemming from the influence operation, and OpenAI's October research expanded on that trend.

"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," OpenAI said in the new report. "This is consistent with our assessment of the capabilities of GPT-4o, which we have not seen as materially advancing real-world vulnerability exploitation capabilities as laid out in our Preparedness Framework."

The latest report referenced other political influence campaigns generating low-engagement AI text and image content, including Russian- and Turkish-language accounts. In these and the other cases mentioned, OpenAI disrupted the campaigns by, at a minimum, banning the associated accounts.

One exception to the low-engagement trend involved a Russian-speaking user on X, formerly Twitter, who, while arguing with another user about former U.S. president and current presidential candidate Donald Trump, appeared to publish an automated "AI-generated" post claiming the account had run out of ChatGPT credits. The post did reach viral levels of engagement, OpenAI said, but it was apparently written manually to attract attention on social media; it was neither genuine AI-generated content nor evidence that the account had actually run out of ChatGPT credits.

TechTarget Editorial contacted OpenAI for additional comment, but the company had not responded at press time.

Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.
