
Meta will label AI-generated images on social platforms

The social media giant's decision to label AI-generated content is a good first step, some observers say, but does not eradicate the problem of disinformation in an election year.



Facebook's parent company, Meta, revealed on Tuesday that it will begin labeling AI-generated images that users post to Facebook, Instagram and Threads.

The social media giant said it is working with other tech vendors to find a common technical standard that signals when content has been created with AI technology.

The company also said it is building tools that detect these invisible markers so it can label images generated with tools from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock. Meta already embeds an invisible watermark and metadata in images generated with its own software.
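Meta has not published how its invisible watermark works, so any concrete example is necessarily speculative. As a toy illustration of the general idea only, the sketch below hides a short tag in the least significant bit of each pixel's red channel, one classic way to embed a marker that viewers cannot see; the tag payload and file handling are hypothetical.

    # Toy illustration only: Meta has not published its watermarking scheme.
    # This hides a short tag in the least significant bit (LSB) of each
    # pixel's red channel, one classic way to embed an invisible marker.
    from PIL import Image  # pip install Pillow

    TAG = "AI"  # hypothetical two-byte payload

    def embed(in_path: str, out_path: str) -> None:
        img = Image.open(in_path).convert("RGB")
        bits = "".join(f"{byte:08b}" for byte in TAG.encode())
        px = img.load()
        for i, bit in enumerate(bits):
            x, y = i % img.width, i // img.width
            r, g, b = px[x, y]
            px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
        img.save(out_path, "PNG")  # lossless, so the hidden bits survive

    def extract(path: str, n_bytes: int = len(TAG)) -> str:
        img = Image.open(path).convert("RGB")
        px = img.load()
        bits = "".join(str(px[i % img.width, i // img.width][0] & 1)
                       for i in range(n_bytes * 8))
        return bytes(int(bits[i:i + 8], 2)
                     for i in range(0, len(bits), 8)).decode(errors="replace")

A marker like this survives only lossless formats: JPEG re-compression or a screenshot destroys it, which previews the robustness criticism discussed below.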

Meta's move, expected to take effect in the coming months, comes as media and tech organizations work to combat AI-generated content intended to manipulate elections around the world in 2024, including the U.S. presidential election and congressional races in November.

It also comes after a wave of AI-generated robocalls impersonating President Joe Biden was aimed at discouraging New Hampshire voters from voting in the state's primary last month.

Useful first step

While Meta's decision serves as a welcome first step, it doesn't resolve the problem of AI-generated disinformation completely, said Kashyap Kompella, an analyst at RPA2AI Research.

"This will serve to apply brakes on the accelerating use of synthetic media for deception," Kompella said.

Tech vendors such as Adobe have been moving in this direction through the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA), and by using the C2PA technical standard for content provenance, Kompella noted. The standard gives publishers, creators and consumers a way to trace the origin of various media types.
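C2PA's signed manifests are normally validated with the coalition's own SDKs, but a rough sense of what such provenance markers look like can be had with a naive scan. The sketch below searches an image file's raw bytes for the IPTC "trainedAlgorithmicMedia" digital source type, the vocabulary term that C2PA-aligned generators embed to flag synthetic imagery; the file name is hypothetical, and this is no substitute for verifying the cryptographic manifest.

    # Naive provenance check: scan an image's raw bytes for the IPTC
    # digital source type term that flags generative-AI imagery. Real C2PA
    # verification validates a cryptographically signed manifest instead.
    from pathlib import Path

    AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC NewsCodes vocabulary term

    def looks_ai_generated(image_path: str) -> bool:
        # Returns True only while the XMP/C2PA metadata is still intact;
        # a screenshot or re-encode defeats this check entirely.
        return AI_MARKER in Path(image_path).read_bytes()

    print(looks_ai_generated("example.jpg"))  # hypothetical file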

"Of course, this doesn't turn off the spigot completely," Kompella continued. "Effectively combating deepfakes calls for a multipronged approach -- user education, tools and technology standards, and platforms' commitment to act."

Some concerns

One concern with Meta's decision is that the social media giant is focused solely on images, said Neil Johnson, a professor of physics at George Washington University.

AI-generated content also commonly includes audio and video, he noted.

Moreover, Meta is one of only a handful of social media companies taking steps to identify AI-generated content on their platforms. X, formerly known as Twitter, for example, has yet to say whether or how it will combat AI-generated content during the election year.

"Whatever this is going to be is kind of a drop in the ocean," Johnson said.

Even if Meta successfully identifies all AI-generated images, watermarks and metadata are easy to remove, he added. Merely labeling AI-generated content does little to protect against fake information spreading at scale across social media platforms.

"All you need to do is take a screenshot of a photo or an image, or resize it, or even sometimes upload it to certain social media, and it strips off that metadata," Johnson continued. "It's kind of useless."

Instead of focusing on labeling, Meta and other platform providers should concentrate on severing the connections between their platforms and the sites where bad actors create AI-generated content, he said.

For example, using a fake-image detector to trace a malicious deepfake back to 4chan -- the platform where the explicit and violent fake images of pop star Taylor Swift originated last month -- and then deleting the original would not eliminate the image, because copies could already be embedded in countless other links.

Gaining trust

However, Meta's decision is a good step toward helping the social media giant show users of its platform that it is aware of the disruptiveness and some of the adverse effects of new AI technologies, according to Sarah Kreps, John L. Wetherill professor of government and law at Cornell University.

"What social media platforms want to be able to do themselves is inoculate against [the threat]," Kreps said. "Especially Meta, which for some time now has had the perception of being behind and blindsided when it comes to how its platforms are being misused."

Steps such as these also help keep users from losing trust in the platforms themselves, she added.

"These watermark technologies are not impervious to misuse," Kreps said. "The hope is that they can take a step in the direction of trying to provide users with more information -- what some people refer to as 'nutrition labels.'"

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
