
3 types of deepfake detection technology and how they work

Think you're talking to your boss on Zoom? You might want to think again. Deepfake technology has already cost enterprises millions of dollars. Here's how to fight fire with fire.

The best deepfakes are becoming increasingly difficult -- if not impossible -- to recognize as inauthentic with the naked eye and ear. Consider, for example, the finance employee at a multinational firm who transferred $25 million into scammers' accounts after attending a video conference with -- he believed -- the company's CFO and other colleagues. All were, in fact, deepfakes.

As generative AI technology advances, deepfakes are becoming more sophisticated and easier, faster and cheaper to make. Cybercriminals can use them to fool biometric authentication and authorization mechanisms and dupe enterprise users across channels, opening the door to financial losses, data breaches and compliance issues.

In the spirit of fighting fire with fire, experts say organizations should consider deepfake detection technology as a way to combat AI-based social engineering and fraud.

Deepfake detection technologies

A report from Forrester cited the following key types of deepfake detection technology:

1. Spectral artifact analysis

Given how AI algorithms generate content, even the most sophisticated deepfakes have the following tell-tale characteristics:

  • Repeated patterns. Algorithms are prone to perfect repetition, while humans are largely incapable of it. For example, a deepfake subject might repeatedly make the same gestures and sounds with identical wavelengths and frequencies. It might also repeatedly appear in the same position and proximity relative to a static object, such as a microphone. An authentic video or audio sample featuring a human subject has far more natural variation and fluctuation at the signal level.
  • Unnatural artifacts. Deepfake algorithms commonly produce voice-like sounds at pitches and with pitch transitions that are impossible for human voices to create.

In a high-quality deepfake, such inconsistencies might be imperceptible to the typical human user. Deepfake detection technology, however, uses spectral analysis to surface these suspicious artifacts at the signal level.
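
To make this concrete, the following minimal Python sketch flags audio frames whose dominant frequency falls outside a rough human vocal range or whose spectra repeat almost perfectly from one frame to the next. The frame size and thresholds are illustrative assumptions, not values from any production detector:

import numpy as np

FRAME = 2048           # samples per analysis frame (assumed)
MAX_HUMAN_HZ = 600.0   # rough ceiling for a dominant vocal pitch (assumed)

def dominant_frequency(frame: np.ndarray, sample_rate: int) -> float:
    """Return the frequency bin carrying the most energy in one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def suspicious_frames(signal: np.ndarray, sample_rate: int) -> list[str]:
    """Flag frames with out-of-range pitch or near-perfect repetition."""
    flags, prev_spectrum = [], None
    for start in range(0, len(signal) - FRAME, FRAME):
        frame = signal[start:start + FRAME]
        # Unnatural artifact: dominant frequency above the assumed vocal ceiling.
        freq = dominant_frequency(frame, sample_rate)
        if freq > MAX_HUMAN_HZ:
            flags.append(f"frame@{start}: dominant {freq:.0f} Hz out of range")
        # Repeated pattern: consecutive spectra that match almost exactly,
        # which natural speech and room noise essentially never produce.
        spectrum = np.abs(np.fft.rfft(frame))
        if prev_spectrum is not None:
            r = np.corrcoef(spectrum, prev_spectrum)[0, 1]
            if r > 0.999:
                flags.append(f"frame@{start}: near-identical spectrum (r={r:.4f})")
        prev_spectrum = spectrum
    return flags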

2. Liveness detection

AI-based liveness detection algorithms aim to confirm the presence or absence of a human in a digital interaction by looking for oddities in a subject's movements and background. According to Forrester, the technology typically uses a 2D image to generate a 3D model, which serves as a reference point of authenticity.

For example, liveness detection in a banking app's biometric authentication tool might prompt users to complete a series of challenges, such as blinking, smiling and turning their heads side to side on demand. A subject's appearance throughout the interaction should be consistent with the tool's internal 3D reference model.
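
In pseudocode terms, that challenge-response flow might look like the Python sketch below. The verify_challenge callable stands in for a real computer-vision model that compares each response against the 3D reference model; its name and signature are assumptions for illustration:

import random
from typing import Callable

CHALLENGES = ["blink", "smile", "turn head left", "turn head right"]

def prompt_user(challenge: str) -> None:
    """Placeholder UI hook; a real app renders the prompt in its own UI."""
    print(f"Please {challenge} now")

def liveness_check(
    capture_frame: Callable[[], bytes],              # grabs video after a prompt
    verify_challenge: Callable[[bytes, str], bool],  # hypothetical CV model
    rounds: int = 3,
) -> bool:
    """Issue challenges in random order and require every one to pass.

    Randomizing the sequence makes replaying a pre-rendered deepfake
    harder, because the attacker cannot predict which prompts will come.
    """
    for challenge in random.sample(CHALLENGES, k=rounds):
        prompt_user(challenge)
        frame = capture_frame()
        if not verify_challenge(frame, challenge):
            return False  # response inconsistent with the 3D reference model
    return True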

3. Behavioral analysis

Context-based behavioral analysis is also helpful in deepfake detection. In authentic video and audio interactions, the following signals should be consistent with normal user behavior (a simple scoring sketch follows the list):

  • How the user moves a mouse.
  • How the user types on a keyboard.
  • How the user interacts with a device.
  • How the user navigates an application.
  • Device ID.
  • Device geolocation.
  • The frequency with which the user's image has appeared in previous transactions.
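
The sketch below shows one simple way such signals could be combined: each session feature is compared against a per-user baseline, and large deviations, an unknown device or an unexpected location all raise the risk count. The feature names, baseline statistics and threshold are hypothetical:

from dataclasses import dataclass

@dataclass
class Baseline:
    mean: float
    std: float

# Hypothetical per-user profile learned from past sessions.
USER_BASELINE = {
    "mouse_speed_px_per_s": Baseline(mean=420.0, std=90.0),
    "typing_interval_ms": Baseline(mean=180.0, std=40.0),
    "nav_actions_per_min": Baseline(mean=12.0, std=4.0),
}

def session_risk(features: dict[str, float],
                 known_device: bool,
                 known_location: bool,
                 threshold: float = 2.5) -> int:
    """Count how many signals deviate from the user's normal behavior."""
    anomalies = 0
    for name, value in features.items():
        base = USER_BASELINE.get(name)
        if base is None:
            continue
        z = abs(value - base.mean) / base.std  # standard score vs. baseline
        if z > threshold:
            anomalies += 1
    anomalies += 0 if known_device else 1      # unrecognized device ID
    anomalies += 0 if known_location else 1    # unexpected geolocation
    return anomalies                           # higher = more suspicious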

Path protection

In some cases, audio and image processing software development kits (SDKs) can detect when the digital signatures of camera and microphone device drivers have changed, flagging direct injection of deepfake content into the acquisition signal path. According to Forrester, however, injection isn't always detectable, depending on the attack.
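
One simplified way to picture that check: record a known-good fingerprint for each capture driver at enrollment, then flag any change. The driver paths and hash values below are hypothetical, and a production SDK would verify the vendor's cryptographic signature rather than a bare file hash:

import hashlib
from pathlib import Path

# Hypothetical driver paths mapped to SHA-256 digests recorded at enrollment.
KNOWN_GOOD = {
    "/usr/lib/drivers/camera.ko": "3f5a19...",
    "/usr/lib/drivers/mic.ko": "9bc144...",
}

def driver_changed(path: str) -> bool:
    """Return True if the driver on disk no longer matches its baseline."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest != KNOWN_GOOD.get(path)

# Any change suggests the acquisition path may have been tampered with,
# though, as noted above, not every injection attack is detectable this way.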


In another path protection method, some identity and authentication vendors use capture SDKs to stamp the capture stream with complex watermarks, which are then used for server-side authentication.
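
The sketch below illustrates the general idea with a keyed hash (HMAC): the capture SDK stamps each frame with a tag binding it to its sequence number, and the server recomputes the tag to confirm the stream came through the trusted capture path. Actual vendor watermarking schemes are proprietary, so treat the key handling and frame format here as assumptions:

import hashlib
import hmac

CAPTURE_KEY = b"secret-provisioned-to-the-capture-sdk"  # hypothetical key

def stamp_frame(frame: bytes, seq: int) -> bytes:
    """Client side: append a MAC binding the frame to its sequence number."""
    tag = hmac.new(CAPTURE_KEY, seq.to_bytes(8, "big") + frame,
                   hashlib.sha256).digest()
    return frame + tag

def verify_frame(stamped: bytes, seq: int) -> bool:
    """Server side: reject frames injected after the capture path."""
    frame, tag = stamped[:-32], stamped[-32:]  # SHA-256 tag is 32 bytes
    expected = hmac.new(CAPTURE_KEY, seq.to_bytes(8, "big") + frame,
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)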

Deepfake detection challenges

Forrester analyst Andras Cser told Informa TechTarget he expects major advancements in detection technology, with defensive AI algorithms eventually able to identify deepfakes reliably. "But I don't think we're quite there yet," he cautioned.

Major challenges include technical integrations and person-to-person interactions.

Technical integrations

Vendors are aggressively developing deepfake detection technology, with several already introducing powerful defensive algorithms. But, according to Cser, integrating deepfake detection capabilities into existing workflows and toolchains poses a major technical challenge.

"The question is, how do you wrap a third-party deepfake detection algorithm around your legacy biometric authentication solution?" he said.

Successful integration requires inserting detection technology into the video, image and audio capture path. The resulting feedback must then trigger proportional in-application outcomes, such as authentication failures, on-screen warnings and terminations of high-risk interactions.
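
Such a wrapper might be structured like the Python sketch below, in which an existing capture function is gated by a third-party detector whose risk score is mapped to a proportional outcome. The detector interface and the thresholds are assumptions for illustration:

from enum import Enum
from typing import Callable

class Outcome(Enum):
    ALLOW = "allow"
    WARN = "warn"            # show an on-screen warning
    FAIL_AUTH = "fail_auth"  # fail the authentication attempt
    TERMINATE = "terminate"  # end the high-risk interaction

def gate_capture(capture: Callable[[], bytes],
                 detector: Callable[[bytes], float]) -> Outcome:
    """Insert detection into the capture path; map risk to an outcome."""
    media = capture()        # video, image or audio from the existing path
    risk = detector(media)   # 0.0 (likely authentic) .. 1.0 (likely deepfake)
    if risk < 0.3:
        return Outcome.ALLOW
    if risk < 0.6:
        return Outcome.WARN
    if risk < 0.85:
        return Outcome.FAIL_AUTH
    return Outcome.TERMINATE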

Person-to-person interactions

While deepfake detection technology best supports routine, predictable, transaction-based interactions, such as biometric authentication, ad hoc person-to-person exchanges remain particularly vulnerable to fraud.

"It's always the human factor that's the biggest challenge," Cser said. "When someone calls you on your mobile phone using a deepfake, that is something for which there are not a ton of technological defenses."

With this in mind, security awareness training for enterprise users remains key, even as deepfake detection technology becomes more sophisticated.

Alissa Irei is senior site editor of Informa TechTarget's SearchSecurity site.
