FBI: Criminals using AI to commit fraud 'on a larger scale'
As AI technology becomes more widely adopted, attackers are abusing it in scams that the FBI says are becoming increasingly difficult to detect.
The FBI warned that threat actors are increasingly leveraging a variety of AI tools in financial fraud schemes.
The law enforcement agency issued a public service announcement on Tuesday detailing several ways attackers use generative AI tools to trick victims into sending money or clicking on malicious links. The PSA included examples of these techniques as well as recommendations for protecting against scams that involve social engineering and spear phishing.
In addition to AI-generated text, the FBI observed scams involving AI-generated photos, videos and audio. It urged individuals to watch for warning signs, even as the fakes become significantly harder to detect.
"The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes," the FBI wrote in the PSA.
The PSA outlined several examples of how threat actors are abusing AI tools. The FBI said attackers use AI-generated text to create fake social media profiles and to generate content for fraudulent websites and phishing emails. Threat actors also use AI-generated images to bolster social media profiles or produce photos to share in private communications to convince targets they are speaking with a real person.
"Criminals generate fraudulent identification documents, such as fake driver's licenses or credentials (law enforcement, government, or banking) for identity fraud and impersonation schemes," the PSA said.
Threat actors also manipulate individuals through voice cloning, the PSA warned. For example, attackers can use AI-generated audio to impersonate well-known public figures or people close to the targeted victim. If successful, the FBI warned, attackers could gain access to bank accounts or trick victims into making fraudulent financial transactions.
Another tactic the FBI warned of is the use of AI-generated videos. The PSA highlighted several ways threat actors leverage AI video tools, including in live video chats and in fake promotional materials for investment fraud schemes. "Criminals generate videos for real time video chats with alleged company executives, law enforcement, or other authority figures," the PSA said.
As scams become more realistic, the FBI recommended that individuals develop a secret word or phrase with family members to verify their identities, and limit the images and voice recordings they share online. The PSA also urged individuals to look for "subtle imperfections" in suspicious images and videos, and to listen closely to tone and word choice to detect potential voice cloning.
Recent threat activity shows how successful and dangerous AI-powered scams have become, and how they are deployed by actors ranging from lower-level cybercriminals to nation-state groups. For example, earlier this year, a North Korean threat actor posed as an IT worker to gain access to sensitive information at security vendor KnowBe4. An investigation revealed that the actor used deepfake video across four video conference interviews to obtain the job and included AI-generated images on his resume.
Additionally, earlier this year, Microsoft revealed that Russian advanced persistent threat actors, such as Storm-1516, created fake videos as part of disinformation campaigns targeting the 2024 U.S. election.
Arielle Waldman is a news writer for TechTarget Editorial covering enterprise security.