
OpenAI launches bug bounty program with Bugcrowd

ChatGPT publisher OpenAI said its new Bugcrowd bug bounty program will not accept submissions involving "issues related to the content of model prompts and responses."

Artificial intelligence research company OpenAI on Tuesday announced the launch of a new bug bounty program on Bugcrowd.

Founded in 2015, OpenAI has in recent months become a prominent entity in the field of AI tech. Its product line includes ChatGPT, Dall-E and an API used in white-label enterprise AI products. Earlier this year, Microsoft announced a multiyear, multibillion-dollar investment in OpenAI aimed at bringing the company's technology to Microsoft products.

OpenAI announced the program via a blog post on its website. Described as part of the company's "commitment to secure AI," the program accepts security vulnerability submissions relating to OpenAI's API, ChatGPT, corporate accounts the company holds with third-party services and more.

"We believe that transparency and collaboration are crucial to addressing this reality," the blog post read. "That's why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems. We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information. Your expertise and vigilance will have a direct impact on keeping our systems and users secure."

Generally speaking, security-related vulnerabilities are considered in-scope for the program, as is API key exposure. However, as the program's Bugcrowd page explained, "issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service."

"Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed," the Bugcrowd page read. "Addressing these issues often involves substantial research and a broader approach. To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model."

In other words, issues involving ChatGPT telling its users "how to do bad things" are considered out of scope. For example, several security researchers have recently discovered bypasses, or "jailbreaks," for ChatGPT's safeguards that allow the chatbot to generate malicious code.

Also out of scope are attacks involving stolen or leaked credentials, vulnerabilities involving dormant open source projects, social engineering attacks, and many other examples listed on Bugcrowd.

Rewards range from $200 to $6,500 per individual vulnerability based on severity and impact, with a maximum researcher payout of $20,000. However, the program forbids researchers from publicly disclosing vulnerabilities submitted to it. Nondisclosure agreements (NDAs) have long been a source of frustration for the security research community, as they deprive bug hunters of credit and allow vendors to silently patch flaws without proper public disclosure.

TechTarget Editorial contacted OpenAI for additional insight into its program's payment scale and disclosure practices, but the company declined to comment.

Katie Moussouris, founder and CEO of Luta Security, told TechTarget Editorial that the nondisclosure requirement is "shortsighted and does not serve the public greater good, nor does it serve OpenAI."

"The best researchers refuse to sign NDAs in exchange for pay that isn't guaranteed at all," she said. "Any issues they choose not to fix could still pose a significant risk, and the public should be informed."

Alexander Culafi is a writer, journalist and podcaster based in Boston.
