Deepfake technology risky but intriguing for enterprises
Enterprises can use the technology to generate synthetic data sets, and it has uses in broadcasting and advertising. However, its privacy and political implications can be dangerous.
A recent online ad from the Daily Voice, an internet news site based in Connecticut, called for newscasters. The ad seemed normal, but for one specific line: "We will use the captured video with your likeness to generate video clips for stories continuously into the future."
What the news site was promising to use is a variant of the AI technology known as deepfakes. The term combines "deep learning" and "fake": deepfakes are synthetic media -- visual or other content that is manufactured by AI models rather than captured from real-world events.
While some consider deepfakes just another form of synthetic data that enterprises can use to train machine learning models, others see the technology as a dangerous tool, one that can sway political opinion and events, harm consumers with fake and misleading images, and damage organizations by eroding trust in authentic data.
Deepfakes as a useful tool
Enterprises must separate the bad from the good with deepfakes, said Rowan Curran, analyst at Forrester Research.
"It's important to disambiguate this idea of deepfakes as a tool that individuals are using to fake a speech by a politician from these useful enterprise [tools] for generating synthetic data sets for very useful and very scalable enterprise [products]," Curran said.
Deepfake technology can be useful in simulated environments, where machine learning models can be trained on situations that don't exist in the real world or for which real data is too sensitive to use. Applications include healthcare, where it can simulate or supplement data sets, and broadcasting, where news outlets like the Daily Voice can generate the voices of popular podcasters or radio hosts in different languages.
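To make the synthetic data idea concrete, here is a minimal, purely illustrative Python sketch: it fits a simple per-class Gaussian model to a stand-in "private" data set, samples synthetic rows, and trains a classifier on those rows instead of the originals. The data and the Gaussian approach are assumptions made for the example; real synthetic data pipelines typically rely on deep generative models such as GANs or diffusion models.

```python
# Illustrative only: a toy generative model standing in for the deep
# generative models used in real synthetic-data pipelines.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "private" data that cannot be shared directly.
real_X = rng.normal(size=(500, 4))
real_y = (real_X @ np.array([0.8, -0.4, 0.3, 0.1]) > 0).astype(int)

# Fit a simple per-class generative model: mean and covariance.
synth_X, synth_y = [], []
for label in (0, 1):
    cls = real_X[real_y == label]
    mean, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
    synth_X.append(rng.multivariate_normal(mean, cov, size=1000))
    synth_y.append(np.full(1000, label))

synth_X, synth_y = np.vstack(synth_X), np.concatenate(synth_y)

# Train on synthetic rows only, then check against the real data.
model = LogisticRegression().fit(synth_X, synth_y)
print("accuracy on real data:", model.score(real_X, real_y))
```

The pattern matters more than the model: the downstream classifier never touches the raw private records, only rows sampled from statistics learned from them.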
The Daily Voice did not respond to requests for comment for this story.
Another application for deepfakes is helping enterprises deliver their messages at scale. One vendor that develops this kind of technology is Hour One.
Hour One uses AI to generate videos of people who have given the company permission to use their likenesses. The vendor has collected more than 100 characters, or deepfakes, based on real people. One of its customers, Alice Receptionist, uses the characters to power virtual receptionists that greet visitors, provide information and connect visitors with employees through video or audio calls.
Duping and scamming
The vendor protects its data and the likenesses it captures from scammers and from those who want to use the technology to dupe others, said Natalie Monbiot, Hour One's head of strategy.
"The whole thing about duping and scamming is a systemic problem," Monbiot said, referring to the practice of hackers gaining access to consumers' social media profiles and organizations' sensitive data. "We understand that synthetic media could be another way duping and scamming can happen, but honestly, it's not necessary for the duping and scamming to happen in the first place."
Scamming and misleading consumers and enterprises can happen even without synthetic media, and Hour One has legal documentation in place to protect its characters, Monbiot said.
But with synthetic media and fast-advancing deepfake tools that enable just about anyone to create relatively high-quality fake images, it's easy for bad actors to sway public audiences for political purposes -- and for companies to pump up advertising in ways viewers can't detect.
"Misleading advertising has a long, proud heritage for American consumers," said Darin Stewart, analyst at Gartner. "This is going to amp that up on steroids."
Meanwhile, organizations have started to spring up to counter the threat of online deepfake technology.
The nonprofit Organization for Social Media Safety has sponsored anti-deepfake legislation in California. The proposed law defines deepfakes as recordings altered in a way that makes them falsely appear authentic, and it prohibits both sexual and political deepfakes created without consent. The consent requirement for political deepfakes is meant to ensure the technology isn't used to alter the democratic voting process.
"Part of the issue here is staying in front of new technology that can have dangers, and we've done a poor job of that as a society," said Marc Berkman, CEO of the social media consumer protection organization. "And this is one example of it. So, getting in front of it, stopping people from being harmed before it really gets entrenched."
Duping and scamming with synthetic media like deepfakes not only affects consumers and political figures, but also harms enterprises.
In one example Stewart cited, an organization was scammed four times. The scammers targeted high-level executives who made frequent public appearances, then used recordings of an executive's voice to train a voice model. With the synthetic voice, they left a voicemail for a lower-level employee asking for a large money transfer, claiming it was needed right away for a deal. The employee, flattered to be recognized by the executive, made the transfer, and the scammers walked away with a sizable sum.
"Now that video deepfakes are becoming higher and higher quality, and less expensive to make, [this type of scam is] only going to expand," Stewart said.
Keeping the bad at bay
However, there are ways to limit damage from bad actors who seek to steal or mislead using deepfake technology, Stewart said. For example, researchers at the University of California, Berkeley have built an AI detection system trained to determine whether a video is a deepfake based on facial movements, tics and expressions.
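As a rough illustration of how such a detector might work, the sketch below trains a classifier on per-video feature vectors that summarize facial movement cues. To be clear, this is not the Berkeley system; every feature and number is invented for the example, and feature extraction is assumed to happen in an upstream tool.

```python
# Illustrative deepfake detector: classify videos as authentic or
# fake from (hypothetical) precomputed facial-movement features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Invented feature vectors: imagine 16 numbers per video describing
# blink rate, head-pose correlations, landmark trajectories, etc.
real = rng.normal(loc=0.0, scale=1.0, size=(300, 16))
fake = rng.normal(loc=0.35, scale=1.2, size=(300, 16))
X = np.vstack([real, fake])
y = np.array([0] * 300 + [1] * 300)  # 0 = authentic, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```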
But detection tools work after the damage is done, and scammers are already using those detection systems to train better deepfakes.
A technological chain of custody -- a record of where a video or image came from, who created it and where edits were made -- might be a better approach to uncovering deepfakes. Keeping such a record for videos, and educating organizations about the certification process, could help identify what is real and what is not.
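As a sketch of what such a record could look like, the snippet below keeps a hash-linked log in which each entry commits to the previous one, so any edit to the history is detectable. Real provenance efforts, such as the C2PA standard, are far more elaborate; every name and value here is illustrative.

```python
# Toy chain of custody: a hash-linked log of capture and edit events.
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, actor: str, action: str, media_hash: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {"actor": actor, "action": action,
             "media_sha256": media_hash, "prev": prev}
    entry["hash"] = record_hash(entry)  # commit to all fields above
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev or record_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, "camera-01", "capture", "ab12cd34")
append_entry(log, "editor@newsroom", "color-correct", "ef56ab78")
print(verify(log))  # True; altering any earlier field breaks verification
```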
However, most people and organizations are not willing to take the extra steps, according to Stewart.
"That's the biggest threat for deepfakes," he said. "A lot of people aren't going to put in the effort to determine whether something has been manipulated or faked. And a big chunk of our society won't care if it is."