
AI vendors tackle generative AI attacks in 2024 election

Some AI vendors have tried to prevent bad actors from using their models and platforms against candidates. Others take a reactive approach, trying to stop or disrupt misinformation after it spreads.



With the U.S. election less than three weeks away, tech vendors are working to disrupt and stop the spread of AI-generated misinformation and disinformation.

The popularization of generative AI means that misinformation and disinformation are proliferating rapidly.

To curb the spread, tech vendors are taking two approaches: prevention and analysis.

"Model developers ... try to have prevention mechanisms," said Gang Wang, associate professor of computer science at the University of Illinois' Grainger College of Engineering. "They are not just preventing, but sometimes, when certain things happen, [they] do post-analysis to say, 'OK, let's try to understand what these malicious actors are doing.'"

The prevention method

Earlier this year, Google and Microsoft blocked their AI chatbots, Gemini and Copilot, from answering election-related questions. OpenAI's popular GPT generative AI systems, on the other hand, respond to questions about elections.

Google faced criticism for not allowing Gemini to answer questions about the failed assassination attempts against Republican presidential nominee Donald Trump.

AI technology vendors also started to label AI-generated content.

In February, Meta unveiled plans to label AI-generated images on Facebook, Instagram and Threads. The social media giant also started putting "Made with AI" labels on AI-generated videos, images and audio posts on all its platforms.

Dissecting and analyzing

AI vendors also take a second approach to reducing the spread of AI-generated misinformation and disinformation: analysis.

On Oct. 9, OpenAI released a report detailing ways it caught and disrupted the use of its models for creating and spreading misinformation and disinformation in U.S. and overseas elections.

For example, a group known as Storm-2035 used ChatGPT to generate long-form articles referencing the U.S. presidential and vice presidential candidates.

Storm-2035 also used OpenAI's models to generate comments on X, formerly Twitter, and Instagram about Scottish independence and the U.S. presidential election.

Storm-2035 was also mentioned in a report Microsoft released in August about an Iranian group targeting the U.S. election; OpenAI's own investigation found the group posting comments on X and Instagram.

In addition, Meta linked Storm-2035 to a 2021 Iranian campaign that targeted voters in Scotland.

Another bad actor attempted to use OpenAI's new GPT-4o model to generate content in support of Republican presidential candidate Donald Trump. The incident came to light in June, when X users noticed that an account that had previously replied to others in English began sharing text in Russian. An investigation found that the account's earlier posts were created using OpenAI's models; on June 18, the account displayed a JSON error message after running out of GPT-4o credits.

The analysis approach positions AI vendors like OpenAI as intermediaries between attackers and social media platform providers, since the vendors can see how their models are used and link that activity to what appears on social media, Wang said.

[Graphic: Disinformation and misinformation accounts are using AI models to create content. As the U.S. election nears, more bad actors are using generative AI to spread disinformation.]

Unable to catch up

Neither prevention nor analysis works to eliminate or fully prevent the spread of misinformation and disinformation, Wang said.

"We still see that malicious actors can still use models to generate undesirable content," he said. "There's a way to jailbreak those preventions."

Even as intermediaries, AI model providers are unable to catch up.

"AI-generated content is a propagandist's dream come true," said RPA2AI CEO Kashyap Kompella. "Technology platforms and social media companies are trying hard but struggling to keep up."

The authors of the OpenAI report, Ben Nimmo and Michael Flossman, confirmed this when they wrote that "the unique insights that AI companies have into threat actors can help to strengthen the defenses of the broader information ecosystem, but cannot replace them."

The open source challenge


Part of the challenge is that while proprietary models like GPT-4o can be closely monitored, the same is not true for open models.

"With these open source models, it is more difficult to monitor their use cases," Wang said. "Anyone can download, anyone can use it to generate content."

Although users of open source models need computers with powerful GPUs to run them, that is not an obstacle for many malicious actors, especially state-sponsored ones, Wang added.

"The lack of traceability is a problem," he said. "When the content is generated, it's really difficult to differentiate it from genuine content."

There's also hybrid editing -- content created jointly by humans and open source models -- which makes misinformation and disinformation even more difficult to detect.

While watermarking and digital fingerprinting serve to deter some misinformation perpetrators, those techniques are not mature enough to be effective, Wang said.

Moreover, watermarking scratches only the surface of what needs to be done to defeat bad actors misusing generative AI models, said Rahul Sood, chief product officer at Pindrop, an AI-authentication and fraud detection vendor. Sood wrote a blog post about an AI-generated deepfake video of Democratic presidential candidate Kamala Harris that circulated widely in July.

"There's a little bit of AI safety whitewashing that is happening by companies claiming watermarking is enough," Sood said. "It is a good first step. We should not assume it is sufficient."

And guardrails like watermarking are at the discretion of the AI vendors, he noted.

"If you're an open source platform, there is no motivation, no incentive, no requirement for you to have these guardrails," Sood continued.

Other challenges

Another problem is that each type of AI-generated content -- text, audio or video -- requires unique angles or responses, said Alon Yamin, co-founder and CEO of Copyleaks, an AI-based text analysis platform.

"It's important to be aware and look for [responses] that are relevant for the specific content," Yamin said.

Even as AI vendors tackle each unique instance of misinformation and disinformation, they can't do it alone.

"Misinformation tactics evolve rapidly, meaning tech vendors must continuously update and refine their AI tools," said Thyaga Vasudevan, executive vice president of product at Skyhigh Security, an IT security vendor. "It's a cat-and-mouse game where bad actors find new ways to manipulate or bypass AI safeguards."

Without adequate security measures, bad actors often can use generative AI to spread misinformation by identifying and manipulating voter preferences, targeting specific voter demographics and generating realistic deepfakes, Vasudevan added.

The lack of regulation also makes battling these tactics challenging, he said.

"AI models alone can't address misinformation without consistent collaboration with governments, social media platforms and civic groups to ensure timely responses," Vasudevan said.

Social media providers also need to collaborate with detection platforms in the same way that industries such as finance have invested in deepfake detection, Sood said.

"They realize it is very important for their consumer's trust," he said. "Social media platforms have not made the same decision."

Clear regulations could boost the use of detection technology on social media platforms.

For example, the Federal Communications Commission earlier this year proposed new rules for AI-generated robocalls.

"There has to be efforts like this that apply not just to the telecom companies, but also to social media platform companies," Sood continued.

However, with the U.S. election so close and other global elections having already occurred, regulation will likely have little impact in 2024.

The work of detection

What could be effective are the efforts of smaller organizations and activist groups.

One of these is TrueMedia.org, a nonprofit, nonpartisan group that identifies political deepfakes. Users can paste in a social media URL, and the group's website detects whether the content is real. TrueMedia.org uses Pindrop's audio deepfake detection technology.

DeepBrain AI is another vendor; it creates AI avatars and converts text to video with generative AI. Because its system generates avatars, it is sometimes mistaken for a deepfake system.

With the spread of deepfakes in elections, DeepBrain created a deepfake detector that monitors content from YouTube, TikTok, Reddit and other social platforms.

"It is impossible to prevent deepfake content from being released," said John Son, global marketing manager at DeepBrain. "It's the aftermath. We see an issue, and then we try to minimize the harm."

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
