
Researchers warn devs of vulnerabilities in ChatGPT plugins

OpenAI and two third-party providers fixed vulnerabilities in the experimental ChatGPT plugins framework, but Salt Security researchers caution devs that security risks persist.

Salt Labs researchers are calling on OpenAI to clarify its documentation and on developers to be more aware of potential security vulnerabilities when working with ChatGPT.

ChatGPT plugins were introduced in March 2023 as an experimental means to connect OpenAI's ChatGPT large language model with third-party applications. Plugin developers could use them to pull data from outside APIs, such as stock prices and sports scores, and to connect to other services to perform actions such as booking a flight or ordering food. ChatGPT plugins are now being supplanted by custom GPTs, introduced in November 2023.

However, some of the earlier ChatGPT plugins remain in use during the transition to custom GPTs, according to security researchers at Salt Labs, a division of API security vendor Salt Security. In June 2023, before custom GPTs were introduced, Salt Labs researchers uncovered an implementation of OAuth authentication for ChatGPT plugins that could potentially have allowed attackers to install a malicious plugin on a user's account and access sensitive data. The researchers also found that attackers could potentially use OAuth redirect manipulation through third-party ChatGPT plugins to steal credentials and access user accounts connected to ChatGPT, including on GitHub.

Two third-party plugin providers, PluginLab.ai and Kesem AI, were notified by Salt Labs and fixed their vulnerabilities as well. The researchers said they did not see evidence that these vulnerabilities had been exploited.

"It's theoretically possible, but very theoretical," said Yaniv Balmas, head researcher at Salt Labs, in an interview with TechTarget Editorial this week. "It's hard as security researchers for us to track all the developments in generative AI, and that statement is also true for attackers, but will they get there? It's a question of not 'if' but 'when.'"

The researchers also acknowledged that custom GPTs do a better job of warning users of the potential risks of connecting to third-party applications, and that OpenAI appears poised to deprecate ChatGPT plugins in favor of custom GPTs. But custom GPTs' actions "build on plugins," according to OpenAI documentation, and Salt Security has uncovered vulnerabilities within that framework as well.

Salt Labs researchers plan to disclose those vulnerabilities following the company's customary disclosure process to give OpenAI time to fix the issues, Balmas said. He declined to offer further details for now.

"We can't say anything about it other than the fact that [custom GPTs] are better secured than plugins, but definitely not hermetic," he said. "The impact is very similar to the impact that we show in these vulnerabilities."

Industry analysts said vulnerabilities disclosed by Salt Labs could potentially have major impacts if poorly configured ChatGPT plugins are given access to sensitive applications, especially code repositories in GitHub.


"It deserves attention because it makes it easy for developers to be vulnerable to attack, as services like ChatGPT plugins can interact with your sensitive data," said Todd Thiemann, an analyst at TechTarget's Enterprise Strategy Group (ESG). "Depending on the plugin, that may also give permission to access your private accounts on GitHub or Google Drive. That sort of unfettered access for adversaries gives them the opportunity to wreak serious havoc."

One analyst, however, said such vulnerabilities are to be expected with any new technology where developers are trying to push out minimum viable products.

"OAuth redirect manipulation is real and should be fixed, but it's sort of a well-trod problem, and I'm realistic that with every new technology, there's got to be a period of testing the waters," said Daniel Kennedy, an analyst at 451 Research, a division of S&P Global. "There are going to be vulnerabilities, and then they'll be mitigated -- there'll be talks at Black Hat about, 'I got all these old tricks to work with this new platform.'"

A screenshot from Salt Labs researchers shows the worst-case scenario: an attacker using ChatGPT plugin vulnerabilities to access a private GitHub repository.

Salt Labs calls for OpenAI doc clarification

OAuth redirect manipulation is a security issue that long precedes ChatGPT and generative AI. If API developers who use OAuth fail to verify that redirect URLs are legitimate before granting access, attackers can create malicious URLs, trick a user into clicking them, and gain access to the API and its dependent systems.
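To illustrate the general principle, the following is a minimal sketch, not drawn from the Salt Labs report: an authorization server should redirect only to URIs that exactly match what the client registered. The client ID, registry and callback URLs here are hypothetical.

```python
# Hedged sketch: hypothetical client registry and URLs, illustrating only the
# principle that redirect URIs must exactly match what the client registered.
REGISTERED_REDIRECTS = {
    "example-plugin": {"https://plugin.example.com/oauth/callback"},
}

def is_allowed_redirect(client_id: str, redirect_uri: str) -> bool:
    """Redirect only to URIs registered for this OAuth client."""
    allowed = REGISTERED_REDIRECTS.get(client_id, set())
    # Exact match; prefix or substring checks can be bypassed with
    # attacker-controlled hosts such as plugin.example.com.evil.tld.
    return redirect_uri in allowed

# A legitimate callback passes; an attacker-supplied redirect does not.
assert is_allowed_redirect("example-plugin", "https://plugin.example.com/oauth/callback")
assert not is_allowed_redirect("example-plugin", "https://attacker.example/callback")
```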

However, such attacks require a user to click the malicious link. The vulnerability discovered in the ChatGPT plugin framework's implementation of OAuth could potentially have circumvented even this step, because the initial implementation did not use a random state value to prevent cross-site request forgery. Instead, it used a value that an attacker could have guessed. That has since been fixed, Balmas said.
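As a rough illustration of the kind of fix the article describes, the sketch below generates a cryptographically random, single-use state value and rejects any callback that does not return it. The function names and session handling are hypothetical, not OpenAI's implementation.

```python
# Hedged sketch of OAuth state handling; names and session shape are
# hypothetical and not taken from OpenAI's or Salt Labs' code.
import secrets

def start_authorization(session: dict, authorize_url: str) -> str:
    """Build the authorization redirect with a random, single-use state."""
    state = secrets.token_urlsafe(32)      # unguessable, unlike a fixed value
    session["oauth_state"] = state
    return f"{authorize_url}?response_type=code&state={state}"

def handle_callback(session: dict, returned_state: str) -> bool:
    """Accept the callback only if its state matches what this session issued."""
    expected = session.pop("oauth_state", None)   # single-use: removed either way
    return expected is not None and secrets.compare_digest(expected, returned_state)

session = {}
start_authorization(session, "https://auth.example/oauth/authorize")
legit_state = session["oauth_state"]
assert handle_callback(session, legit_state)       # genuine callback succeeds
assert not handle_callback(session, legit_state)   # replayed or forged state fails
```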

Still, a Salt Labs blog post published this week said OpenAI should clarify its documentation about how to implement authentication in ChatGPT plugins and actions to emphasize the potential security risks of misconfiguration.

"We believe that some of these vulnerabilities could be avoided if developers were more aware of the risk," the blog post reads.

Such awareness is warranted as generative AI adoption accelerates, according to ESG's Thiemann.

"It is good that a vendor like Salt Labs brings awareness to these vulnerabilities, because organizations need to understand the risk and implement the right policies to secure usage," he said.

OpenAI did not respond to requests for comment as of press time.

Beth Pariseau, senior news writer for TechTarget Editorial, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out on X @PariseauTT.
