OpenAI CEO Sam Altman weighs in on content authentication
OpenAI says it's working on new tools to identify content created by its generative AI tools, as Congress weighs legislation to protect individuals against AI-generated replicas.
OpenAI CEO Sam Altman said that while the company's latest update focuses on deploying new tools to identify AI-generated content, the focus could eventually shift to verifying content that was not generated by AI.
During an online discussion with the Brookings Institution, Altman emphasized the importance of authenticating non-AI-generated content, particularly in critical areas such as election information. Indeed, President Joe Biden's 2023 executive order on AI tasked the Department of Commerce with developing guidance for authenticating official content.
"This idea that if someone in an election has a really important message, they cryptographically sign it and say, 'I said this, here's the proof and you can check that,'" Altman said during the Brookings Institution discussion. "That seems to me like a reasonably likely part of the future for certain kinds of messages."
Still, the company remains focused on identifying AI-generated content. OpenAI said Tuesday it is developing new methods for enhancing digital content integrity, including tamper-resistant AI watermarking to flag audio or video content with an invisible signal that's difficult to remove. The company is also working on detection classifiers, tools that use AI to assess whether content originated from generative AI models.
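OpenAI has not published how its watermarking works. The general idea behind an "invisible signal," though, can be sketched with a toy spread-spectrum watermark: add a faint pseudorandom pattern derived from a secret key, then detect it later by correlating against that same pattern. This is only conceptual; a genuinely tamper-resistant scheme must survive compression, re-encoding and deliberate attack.

```python
import numpy as np

def embed_watermark(samples: np.ndarray, key: int, strength: float = 0.05) -> np.ndarray:
    """Add a faint key-derived pseudorandom signal to the audio samples."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(samples.shape)
    return samples + strength * mark

def detect_watermark(samples: np.ndarray, key: int, strength: float = 0.05) -> bool:
    """Correlate against the key-derived signal; marked audio scores near `strength`."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(samples.shape)
    score = float(samples @ mark) / samples.size
    return score > strength / 2

audio = np.random.default_rng(0).standard_normal(48_000)  # stand-in for 1s of audio
marked = embed_watermark(audio, key=1234)
print(detect_watermark(marked, key=1234))  # expected: True
print(detect_watermark(audio, key=1234))   # expected: False
```

Without the key, the watermark looks like low-level noise, which is what makes it "invisible"; the hard engineering problem, and the one OpenAI says it is working on, is making the signal survive when someone actively tries to strip it.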
Altman talks AI and the 2024 election
Congress has become increasingly concerned about AI's impact on the 2024 U.S. presidential election, weighing multiple bills that target AI deepfakes and would protect individuals' voices and likenesses from AI-generated replicas.
Companies have responded to such concerns by taking steps to identify AI-generated content, including Meta -- which said it plans to label AI-generated content on Facebook and Instagram -- and OpenAI, with its new content authentication tools. Altman said companies have been building technological defenses against misinformation.
When it comes to deterring election interference caused by AI-generated content, Altman said he's pleased with how seriously AI companies and platforms are taking the issue. What concerns him even more than public misinformation is one-on-one persuasion between an AI and an individual, which he said is harder to detect and potentially more harmful.
"It's good that we are worried, and that's leading to vigilance in the industry," he said.
OpenAI said Tuesday it is opening applications to a first group of testers for access to its image detection classifier, which predicts the likelihood that an image was generated by the company's latest image model, DALL-E 3. In internal testing, the classifier correctly identified 98% of images generated by DALL-E 3, according to OpenAI's news release.
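OpenAI has not disclosed the classifier's architecture. As a purely hypothetical sketch of the interface such a tool exposes -- a binary classifier that emits a probability -- the PyTorch model below and its names are assumptions for illustration, not OpenAI's implementation.

```python
import torch
import torch.nn as nn

class ProvenanceClassifier(nn.Module):
    """Hypothetical binary classifier: P(image was generated by a given model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per image
        )
        self.head = nn.Linear(32, 1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images).flatten(1)
        return torch.sigmoid(self.head(x)).squeeze(1)  # likelihood in [0, 1]

# Untrained toy model, so the output here is meaningless; the point is the
# shape of the task: image in, probability of being model-generated out.
model = ProvenanceClassifier().eval()
batch = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    likelihood = model(batch).item()
print(f"Estimated probability the image is model-generated: {likelihood:.2f}")
```

Reporting accuracy as "98% of DALL-E 3 images identified" describes only one side of such a classifier's performance; false positives on non-AI images matter just as much in practice.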
The company also said it is joining the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), the group behind a widely used standard for certifying digital content and proving it came from a particular source.
Beyond concerns about election misinformation, Altman said he expects AI's impact on the economy, along with maintaining U.S. leadership in the technology, to be a focus for U.S. candidates and a top-of-mind issue for voters.
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.