Vulnerability disclosure
What is vulnerability disclosure?
Vulnerability disclosure is the practice of reporting security flaws in computer software or hardware. Security researchers, IT security teams, in-house developers, third-party developers and others who work with the vulnerable systems may disclose vulnerabilities directly to the parties responsible for the flawed systems.
Ensuring that software and hardware vendors can address vulnerabilities before bad actors find and exploit them is crucial. Identifying such flaws is so important that many organizations run bug bounties, or vulnerability reward programs, which pay researchers for finding flaws, alongside internal code audits and penetration tests as part of their vulnerability management strategies.
Challenges with vulnerability disclosure programs
Vulnerability disclosures can be controversial because vendors often prefer to wait until a patch or other form of mitigation is available before making the vulnerability public. However, researchers, cybersecurity professionals and enterprises whose sensitive data or systems may be at risk prefer that disclosures be made public as soon as possible.
Here's why the stakeholders involved often have different priorities regarding vulnerability disclosures:
- Vendors, developers or manufacturers of the vulnerable systems or services may prefer that vulnerabilities be disclosed only to themselves and made public only after the patches are introduced.
- Users of the vulnerable products or services may prefer that the systems they use are patched as quickly as possible.
- Security researchers who uncover vulnerabilities may prefer that remediation happen quickly so they can publish details of their discoveries. However, when a vulnerability cannot be patched before attackers begin exploiting it, early disclosure is still preferable if users have other ways to mitigate or eliminate the threat.
Types of vulnerability disclosures
The paths to vulnerability disclosure that an organization can take include the following.
Responsible disclosures
Responsible disclosure is one approach that vendors and researchers have used for many years. Under a responsible disclosure protocol, researchers tell the system providers about a vulnerability and give them a reasonable timeline to investigate and fix it.
They then publicly disclose the vulnerability once it has been patched. Typically, responsible disclosure guidelines give vendors 60 to 120 business days to patch a vulnerability. Often, vendors negotiate with researchers to modify the schedule and allow more time to fix difficult flaws.
Coordinated vulnerability disclosures
In 2010, Microsoft attempted to reshape the disclosure landscape by introducing coordinated vulnerability disclosure (CVD). The Cybersecurity and Infrastructure Security Agency has since adopted CVD.
Under CVD, researchers and vendors work together to identify and fix vulnerabilities and negotiate a mutually agreeable amount of time for patching the product and informing the public. Researchers can also opt to disclose to a computer emergency response team (CERT), which reports privately to the vendor, or to a private third-party provider, which works with the vendor privately.
Self-disclosures
Self-disclosures occur when the manufacturers of products with vulnerabilities discover the flaws and make them public, usually simultaneously with publishing patches or other fixes.
Third-party disclosures
Third-party disclosures occur when the parties reporting the vulnerabilities are not the owners, authors or rights holders of the affected hardware, software or systems.
Third-party disclosures are usually issued by security researchers who have informed the manufacturers of the vulnerability. These disclosures may also involve a CERT.
Vendor disclosures
Vendor disclosures occur when researchers report vulnerabilities only to the application vendors, which then develop patches.
Full disclosures
In full disclosures, the complete details of a vulnerability are released publicly, often as soon as they are known.
Vulnerability disclosure policy guidelines
A vulnerability disclosure policy (VDP) provides straightforward guidelines for submitting security vulnerabilities to organizations. A VDP offers a way for people to report vulnerabilities in a company's products or services.
A VDP should contain the following components, according to the National Telecommunications and Information Administration:
- Brand promise. This enables a company to demonstrate its commitment to security to customers and others potentially affected by a vulnerability, assuring users and the public that safety and security are priorities. The company describes the vulnerability-related work it has done and what it expects to do going forward.
- Initial program and scope. This indicates which systems and capabilities are fair game and which are off-limits to the people and groups that find and report new vulnerabilities. For example, a company may encourage submissions for sites it owns but explicitly exclude any customer websites hosted on its infrastructure.
- "We will take no legal action if." A company informs researchers about the activities or actions they take and whether they will or will not result in legal action.
- Communication mechanisms and process. This identifies how researchers should submit vulnerability reports -- by using a secure web form, email or a published security.txt file, for example (see the sketch after this list).
- Nonbinding submission preferences and prioritizations. These set expectations for how a company will evaluate and prioritize reports and let researchers know which issues it considers most important. Typically, an organization's support and engineering teams maintain this dynamic document.
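One common way to publish the communication mechanism is a security.txt file, a machine-readable format standardized in RFC 9116 that an organization serves at /.well-known/security.txt. The sketch below is a minimal, hypothetical example; the contact address, URLs and expiration date are placeholders, not a real program's details.

```
# Hypothetical file served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security/vdp
Acknowledgments: https://example.com/security/thanks
Preferred-Languages: en
```

Because finders and automated scanners know to look for this well-known path, a maintained security.txt removes the guesswork about where reports should go.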
In their VDPs, companies can also let finders know when they can publicly talk about vulnerabilities. For example, an organization may state that a finder cannot publicly disclose a vulnerability:
- until it is fixed;
- until a certain amount of time has passed since a report was first submitted;
- until the finder has given the organization X days of notice; or
- except on a mutually agreed-upon timeline that may be modified as part of the process with the disclosing party.
Vulnerabilities reported to the CERT Coordination Center (CERT/CC) at Carnegie Mellon University's Software Engineering Institute are forwarded to the affected vendors "as soon as practical after we receive the report."
Currently, security researchers do not agree on what constitutes "a reasonable amount of time" to allow a vendor to patch a vulnerability before full public disclosure.
Many industry vendors, as well as Google's Project Zero team, recommend a 90-day deadline to fix a vulnerability before full public disclosure, shortened to seven days for vulnerabilities that are already being actively exploited.
Disclosure deadlines can vary among vendors, researchers and other organizations. Vulnerabilities reported to the CERT Coordination Center are disclosed to the public 45 days after they are first reported, whether or not the affected vendors have issued patches or workarounds.
Extenuating circumstances, such as "active exploitation, threats of an especially serious (or trivial) nature or situations that require changes to an established standard," can affect the CERT/CC's deadline. The coordination center may disclose a software vulnerability before or after the 45-day period in some cases.
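As a rough illustration of how these policy windows turn into concrete dates, the following Python sketch computes candidate disclosure deadlines from a report date. The window values come from the figures cited above; the function and structure are purely illustrative, not part of any organization's actual tooling.

```python
from datetime import date, timedelta

# Illustrative disclosure windows drawn from the policies described above.
POLICY_WINDOWS = {
    "cert_cc": 45,            # CERT/CC: public 45 days after the first report
    "project_zero": 90,       # Project Zero-style: 90 days to patch
    "actively_exploited": 7,  # shortened window for in-the-wild exploitation
}

def disclosure_deadline(reported: date, policy: str) -> date:
    """Return the planned public-disclosure date under the given policy."""
    return reported + timedelta(days=POLICY_WINDOWS[policy])

if __name__ == "__main__":
    reported = date(2025, 3, 1)  # hypothetical report date
    for name in POLICY_WINDOWS:
        print(f"{name}: disclose on {disclosure_deadline(reported, name)}")
```

In practice, such a date is a starting point for negotiation rather than a hard stop, as the extensions and extenuating circumstances above show.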
Vulnerability disclosure process
Although there is no formal industry standard when it comes to reporting vulnerabilities, disclosures typically follow the same basic steps:
- A researcher discovers a security vulnerability and determines its potential impact. The finder then documents the vulnerability's location with code snippets or screenshots.
- The researcher develops a vulnerability advisory report detailing the vulnerability and including supporting evidence and a full-disclosure timeline, then securely submits the report to the vendor (a minimal report skeleton appears after this list).
- The researcher usually gives the vendor a reasonable period to investigate and patch the vulnerability according to the advisory full-disclosure timeline.
- Once a patch is available or the timeline for disclosure -- and any extensions -- has elapsed, the researcher publishes a full-disclosure vulnerability analysis of the exploit, including a detailed explanation of the vulnerability, its impact and the resolution.
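There is no single mandated format for the advisory report, but the elements named in these steps map naturally onto a structured record. The Python sketch below models one possible report skeleton; every field name and value here is a hypothetical assumption chosen for illustration, not an industry schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VulnerabilityAdvisory:
    """Hypothetical advisory report skeleton mirroring the steps above."""
    title: str                 # short, descriptive name for the flaw
    product: str               # affected product and version range
    location: str              # where the flaw lives: file, endpoint or component
    impact: str                # what an attacker could achieve
    evidence: list[str] = field(default_factory=list)   # code snippets, screenshots
    reported: date = field(default_factory=date.today)  # when the vendor was notified
    full_disclosure: date | None = None                 # agreed publication date

# Example usage with made-up details:
advisory = VulnerabilityAdvisory(
    title="SQL injection in login form",
    product="ExampleApp 2.x",
    location="POST /login, 'username' parameter",
    impact="Authentication bypass and database read access",
    evidence=["poc.py output", "screenshot-01.png"],
)
```

Once a patch ships or the deadline elapses, the same record can anchor the public analysis described in the final step.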
Branded vulnerabilities
Security researchers have begun branding their vulnerability disclosures, creating catchy names, dedicated websites and social media accounts for the flaws they find. These campaigns often include academic papers describing the vulnerabilities and even custom-designed logos.
Prominently branded vulnerabilities of recent years include the following:
- ImageTragick, the name applied to a set of vulnerabilities in the open source ImageMagick library for processing images;
- Badlock, a flaw that affected almost all versions of Windows and Samba;
- HTTPoxy, a group of vulnerabilities in applications that use a Hypertext Transfer Protocol proxy; and
- KRACK, a key reinstallation attack on Wi-Fi Protected Access 2 (WPA2) authentication.
The information security community is divided on whether such efforts are appropriate. Researchers who promote branded vulnerabilities may be seen as attempting to advance their research, whether or not the vulnerabilities are serious.
Others take issue with branding when a well-supported public relations effort for a vulnerability distracts the public from other vulnerabilities that have been made public without extensive publicity campaigns.
Learn more about the top cybersecurity vulnerabilities, and check out our ultimate guide to cybersecurity incident response.