
Microsoft targets AI deepfake cybercrime network in lawsuit
Microsoft alleges that defendants used stolen Azure OpenAI API keys and special software to bypass content guardrails and generate illicit AI deepfakes for payment.
Microsoft named four individuals in a lawsuit targeting a global cybercrime network allegedly generating illicit AI deepfakes of celebrities.
On Jan. 10, Microsoft's Digital Crimes Unit (DCU) announced in a blog post that it was taking legal action against cybercriminals who, the company said, "intentionally develop tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content."
Specifically, Microsoft filed a lawsuit in December targeting Storm-2139, a cybercrime network it said was abusing generative AI services, bypassing guardrails, and offering the resulting tools to end users at varying tiers of service and payment. End users would then use the bypassed products "to generate violating synthetic content, often centered around celebrities and sexual imagery," also known as deepfakes.
As a result of this legal action, Microsoft said in a new blog post Thursday, the company obtained a temporary restraining order and preliminary injunction enabling it to seize a website instrumental to the group, "effectively disrupting the group's ability to operationalize their services." This disruption appeared to have panicked members of the group.
"The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another," said Steven Masada, assistant general counsel for Microsoft DCU, in the blog. "We observed chatter about the lawsuit on the group's monitored communication channels, speculating on the identities of the 'John Does' and potential consequences."
Masada continued, "As a result, Microsoft's counsel received a variety of emails, including several from suspected members of Storm-2139 attempting to cast blame on other members of the operation." The blog post includes screenshots of alleged Storm-2139 members reporting other alleged members of the group via emails.
In the complaint, which was amended Thursday, Microsoft named four individuals: Arian Yadegarnia of Iran, Alan Krysiak of the United Kingdom, Ricky Yuen of Hong Kong and Phát Phùng Tấn of Vietnam.
Microsoft alleged that the group bypassed the company's Azure OpenAI Service guardrails using stolen Azure OpenAI API keys -- which Microsoft discovered in late July 2024 -- in tandem with software the defendants created, named de3u. De3u lets users issue API calls to generate images with DALL-E models.
"Defendants' de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAPI Service API requests," the complaint read. "These requests are authenticated using stolen API keys and other authenticating information. Defendants' de3u software permits users to circumvent technological controls that prevent alteration of certain Azure OpenAPI Service API request parameters."
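The mechanism the complaint describes amounts to request forgery with valid credentials: because Azure OpenAI's image endpoint authenticates callers by an API key carried in a request header, anyone holding a leaked key can assemble a request indistinguishable from a legitimate customer's. A minimal sketch of how such a key-authenticated request is put together (resource, deployment, and key names below are illustrative placeholders, not details from the case, and nothing here is sent over the network):

```python
def build_image_request(resource: str, deployment: str, api_key: str,
                        prompt: str, api_version: str = "2024-02-01") -> dict:
    """Assemble (but do not send) an Azure OpenAI image-generation request.

    Illustrative only: shows that the api-key header is the sole proof of
    identity in this authentication scheme, which is why a stolen key is
    sufficient to impersonate the legitimate key holder.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/images/generations?api-version={api_version}")
    headers = {
        "api-key": api_key,  # the secret itself -- no other caller identity
        "Content-Type": "application/json",
    }
    body = {"prompt": prompt, "n": 1, "size": "1024x1024"}
    return {"url": url, "headers": headers, "json": body}


# A request built with a stolen key looks identical to a legitimate one.
req = build_image_request("contoso-ai", "dalle-3", "<placeholder-key>",
                          "a watercolor lighthouse")
```

The sketch also suggests why Microsoft's remediation focused on revoking the stolen keys and seizing infrastructure: once the key is the only credential checked, server-side controls and key rotation are the main levers left.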
Microsoft asked the Eastern District of Virginia court to declare the defendants' actions as willful and malicious, to secure and isolate the infrastructure of the website, and to award damages to Microsoft in an amount to be determined at trial.
In an email, a Microsoft spokesperson told Informa TechTarget that as part of its ongoing efforts to minimize the risks of AI technology misuse, its teams are continuing to work on guardrails and safety systems in line with its responsible AI principles, such as content filtering and operational monitoring. The spokesperson also shared links to various Microsoft security blogs, including a post published last April about how the company discovers and mitigates attacks against AI guardrails.
Alexander Culafi is a senior information security news writer and podcast host for Informa TechTarget.