Intel deepfake detector raises questions
The vendor created a detector that uses facial blood flow to determine if a video subject is fake or real. However, its effectiveness at detecting fake videos is unclear.
Semiconductor chip manufacturer Intel is targeting deepfakes.
As part of its Responsible AI initiative, the vendor on Monday introduced a deepfake detector, FakeCatcher.
Deepfakes are fake images, videos and audio of real people made with AI technology. The hardware and software vendor's new product comes as deepfakes become easier to create, more widespread and harder to detect. Fraudsters use deepfakes to spread political disinformation, gain access to sensitive information and harm companies' reputations.
FakeCatcher uses Intel's OpenVINO -- an open source deep learning toolkit -- to run AI models for face and landmark detection. The technology uses "subtle blood flow" in the pixels of a video to determine whether a video is real or fake, according to Intel.
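Intel has not released FakeCatcher's code, but the face detection step it describes can be sketched with OpenVINO's standard Python inference API. In the sketch below, the model file name and the input frame are placeholders, not part of Intel's product:

```python
# Minimal sketch: running a face detection model with OpenVINO's Python API.
# "face-detection.xml" and "frame.png" are placeholders; this illustrates the
# general inference flow, not Intel's FakeCatcher implementation.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection.xml")   # placeholder model file
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

frame = cv2.imread("frame.png")                 # one video frame
# Resize and reorder to the NCHW layout most detection models expect.
h, w = compiled.input(0).shape[2], compiled.input(0).shape[3]
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

detections = compiled([blob])[output_layer]     # bounding boxes and confidences
```

Once faces are localized this way, the cropped face regions would feed the blood flow analysis described next.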
The deepfake detector collects blood flow signals and translates them into spatiotemporal maps. Deep learning models then analyze these maps to determine whether the video is real or fake.
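Intel has not published the details of this step, but the general idea -- averaging subtle color variation over face regions across frames into a spatiotemporal map -- can be sketched roughly as follows. The grid size, green-channel choice and function names are illustrative assumptions, not Intel's algorithm:

```python
import numpy as np

def ppg_spatiotemporal_map(face_frames: np.ndarray, grid: int = 8) -> np.ndarray:
    """Illustrative sketch only: reduce a stack of cropped face frames
    (T, H, W, 3) to a (grid*grid, T) map of per-region green-channel
    means, a crude stand-in for the blood flow (PPG) signals FakeCatcher
    is described as using. Not Intel's actual method.
    """
    t, h, w, _ = face_frames.shape
    rows = np.array_split(np.arange(h), grid)
    cols = np.array_split(np.arange(w), grid)
    signals = []
    for r in rows:
        for c in cols:
            # Green channel tends to carry the strongest pulse signal.
            region = face_frames[:, r[:, None], c, 1]
            signals.append(region.reshape(t, -1).mean(axis=1))
    m = np.stack(signals)  # shape: (grid*grid, T)
    # Remove each region's mean so only the pulse-like variation remains.
    return m - m.mean(axis=1, keepdims=True)

# A downstream deep learning classifier would then label each map real or fake.
```

The appeal of a map like this is that a classifier can look for the consistent, synchronized pulse pattern a real face produces, which generated faces typically lack.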
Many vendors are starting to create detection tools not only in the video and audio market but also in the text generation arena, said Rowan Curran, an analyst at Forrester Research.
"This is just the beginning what's going to be an ongoing interaction between positive and negative actors," he said.
Although Intel claims the technology is mostly accurate, it remains unclear whether it can detect a deepfake video before bad actors accomplish their goal.
Conditions for accuracy
"If FakeCatcher works as advertised, it is a significant step forward," said Darin Stewart, an analyst at Gartner. "To date, most detectors have been reactive, analyzing media after the fact. Even when successful, it is usually too late."
One of the challenges for FakeCatcher is its use of facial blood flow, Stewart said.
"This is a clever approach but makes me wonder how dependent the detector is on visual alignment," he said. It's unclear what type of conditions the technology needs to work effectively.
For example, it is not clear if the technology will work if the subject's face is in profile or if the subject needs to be fully visible. It's also unproven if lighting matters and whether the technology works with people of all racial and ethnic backgrounds, Curran said.
"That is obviously been an issue that we have seen in other types of applications, as they are tailored for Caucasian people. And often times they're even tailored for Caucasian [men]," he said.
Based on tests done across different genders and races from a sample data set, there isn't a significant difference in accuracy between genders or skin colors, Intel said in a statement to TechTarget.
The vendor also tested the technology on deepfakes "in the wild" that contained different artifacts, including varied movement, resolution and compression. The technology proved mostly accurate, Intel said.
Another challenge for FakeCatcher is adoption by social media companies, news organizations and the public.
"Many of those organizations, particularly social media, have been very reluctant to take substantive steps to combat disinformation, as doing so runs counter to their business model," Stewart said. It's also hard to tell that, whether the public will believe the technology or think FakeCatcher is the "fake news" if FakeCatcher labels a video as fake.
Editor's note: This story has been updated after initial publication.