How to determine out-of-scope bug bounty assets

What happens when a security researcher discovers a bug in an out-of-scope asset? Learn how to handle bug bounty scope in this excerpt from 'Corporate Cybersecurity.'

Applications and networks are rarely hardened as well as they should be, so why not incentivize bug discovery by third parties? Organizations that start their own bug bounty programs can pay security researchers to report the bugs they discover, rather than risk those bugs being published online or found first by attackers.

Those interested in getting started can turn to Corporate Cybersecurity: Identifying Risks and the Bug Bounty Program by author and security researcher John Jackson. One aspect to consider when creating a program is scope: not every bug a researcher finds will be covered by the bug bounty policy. Paying for these out-of-scope bugs, Jackson said, is still important.

"If you have a remote code execution on a server holding 100,000 PII [personally identifiable information] records, then 'out of scope' doesn't really mean anything anymore," Jackson said. "Attackers don't care about out of scope. If a hacker touches that data and leaks it, you're looking at paying a lot of money per record."

In the following excerpt from Chapter 4, Jackson explains how to determine what out of scope means, and why program managers shouldn't treat it as absolute.

Check out a Q&A with Jackson to learn more about bug bounty programs and how they differ from vulnerability disclosure programs.


4.9 When Is an Asset Really Out of Scope?

Overall, the objective of defining out-of-scope assets is never to discourage a security researcher from reporting, but to ensure that baseline restrictions are in place to prevent legal or financial impact. With that in mind, program managers should be fair when evaluating which vulnerabilities are in and out of scope.

A security researcher pulls up a list of subdomains for the enterprise and finds a lot of assets, some of which are explicitly stated as out of scope. Researchers are curious, whether by nature or acquired over time; they want to know how components work. In this specific scenario, assume it's well known that this asset is out of scope (defined in the out-of-scope section of the enterprise bug bounty program): should it be tested?
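As an illustration, checking a discovered subdomain against a program's out-of-scope wildcard patterns can be reduced to a few lines. This is a minimal sketch, not from the book; the helper name and the pattern list are hypothetical, built from the chapter's example policy:

```python
from fnmatch import fnmatch

# Hypothetical out-of-scope list mirroring the example program's policy
OUT_OF_SCOPE = ["*.prod.example.com"]

def is_out_of_scope(host: str) -> bool:
    """Return True if the host matches any out-of-scope wildcard pattern."""
    return any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE)

for host in ("admin.prod.example.com", "api.example.com"):
    print(f"{host} -> out of scope: {is_out_of_scope(host)}")
# admin.prod.example.com -> out of scope: True
# api.example.com -> out of scope: False
```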

If the answer that came to mind was no, it can be simultaneously the right and the wrong answer. To demonstrate scenarios that bug bounty program managers may run into, consider two examples: a critical (P1) vulnerability and a low (P4) vulnerability.

(P1) -- Critical: Admin Account Takeover

In this scenario, while testing various subdomains, the researcher notices something strange. One of the discovered subdomains is "admin.prod.example.com." The researcher reviews the scope and sees that *.prod.example.com is listed as out of scope. Nonetheless, curiosity takes over, and to the researcher's surprise, they find an exposed client-side Laravel Debugbar. At this point, they have to make a choice: to go out of scope or not?
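For context, a rough probe for an exposed Debugbar might look like the sketch below. It assumes the package's default "_debugbar" route prefix, and the target host is the article's fictional example; treat it as a heuristic, since exact routes vary by version:

```python
import requests

# Hypothetical target from the article's scenario
TARGET = "https://admin.prod.example.com"

def debugbar_exposed(base_url: str) -> bool:
    """Heuristic: does a default Debugbar route answer publicly?

    A non-404 response on the default "_debugbar" prefix suggests the
    debug tooling is reachable from the internet.
    """
    try:
        resp = requests.get(f"{base_url}/_debugbar/open", timeout=5)
    except requests.RequestException:
        return False
    return resp.status_code != 404

if debugbar_exposed(TARGET):
    print("Debugbar routes respond; debug tooling may be exposed in production")
```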

Now, in the case of the Laravel Debugbar, imagine that the researcher intercepts admin credentials and is able to log in to the web application that resides on the admin endpoint. The web application portal has internal employee email addresses, sensitive workflow information, client addresses and other contact information, and so on. A moral dilemma occurs, and the researcher now has to decide if it's worth it to even report a vulnerability that is identified as "out of scope."

In most circumstances, the researcher will report the vulnerability, especially if it pertains to account takeover or PII (personally identifiable information).

Now, review the next scenario, keeping the P1 example in mind.

(P4) -- Low: Apache Server Status Page Leaking Server Information

Similarly, the same security researcher may have stumbled upon a subdomain named "sv1.prod.example.com." As the previous example stated, *.prod.example.com is out of scope. When loading the web page, the researcher finds the /server-status page of the web server and can now see specific information, such as version numbers, recorded URL paths, CPU usage and some internal IP addresses.
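That page is served by Apache's mod_status module, which also offers a machine-readable view via the "?auto" query string. A minimal sketch of such a probe follows, using the article's fictional host; the exact fields exposed vary by Apache version and configuration:

```python
import requests

# Fictional host from the article's scenario; "?auto" requests
# mod_status's machine-readable summary.
URL = "http://sv1.prod.example.com/server-status?auto"

try:
    resp = requests.get(URL, timeout=5)
    if resp.status_code == 200 and "Total Accesses" in resp.text:
        print("mod_status is publicly readable; leaked fields include:")
        for line in resp.text.splitlines():
            # These fields reveal versioning, load and uptime details
            if line.split(":")[0] in ("ServerVersion", "CPULoad", "Uptime"):
                print(" ", line)
except requests.RequestException as err:
    print(f"request failed: {err}")
```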

While this may seem like a serious vulnerability to an amateur researcher, the severity of such a vulnerability depends entirely on the level of information that can be accessed. For example, if this server-status page records all root domain endpoints, it's possible the researcher will find an endpoint that reveals sensitive information after navigating to it. However, if no sensitive information is revealed, whether outright or through recorded endpoints, the vulnerability will be treated as a low.

The researcher once again has to make a choice: to report it or not. In many instances, it may be reported simply out of a desire to do the right thing. The researcher knows that whether they get paid rests in the hands of a program manager. Nonetheless, a bug bounty program manager will have to make the right decision on paying the hacker. Alternatively, when using a bug bounty platform provider, program managers can resolve the report with no monetary award in a way that still benefits the researcher's credibility.

4.10 The House Wins -- Or Does It?

Many bug bounty programs operating today ultimately decide not to pay the security researcher if a finding is out of scope. Some managers subscribe to the school of thought of "We don't want to encourage our researchers to hack out-of-scope assets just to get a bounty" or "If we pay a researcher for one out-of-scope bounty, we will have to pay them for every out-of-scope bounty."

A major issue with adverse thinking toward out-of-scope research is that it can lead to key research being overlooked. When determining whether paying for a reported vulnerability is the right move, ask the following questions:

  1. What is the level of impact of this vulnerability? Could it lead to extremely sensitive information leakage?
  2. If no sensitive information is leaked, could the vulnerability severely damage company reputation?
  3. Do we value the researcher and want to reward them?

One size does not fit all in terms of paying a security researcher. Remember, when managing a program, the rules are made by the program manager. While transparency with leadership must always be exercised, caring is the key to developing a strong and loyal researcher base and building brand reputation within the cybersecurity community. Let's take the earlier examples of the critical and low vulnerabilities and perform a dry run of answering the above questions.

P1 -- Admin Account Takeover

  1. What is the level of impact of this vulnerability? Could it lead to extremely sensitive information leakage? Yes, the researcher has found multiple pieces of PII and information that can be valuable if sold to or obtained by a threat actor.
  2. If no sensitive information is leaked, could the vulnerability severely damage company reputation? Even if the researcher could not leak sensitive information, this admin panel contains functionality that could take down key parts of the business, resulting in monetary impact, in turn resulting in reputational damage.
  3. Do we value the researcher and want to reward them? Personally, this researcher has never reported to us before, but it would be nice to reward them for their efforts.

In this admin takeover example, it's easily determined that a lot of damage could occur. Company information could be sold, or there could be direct impact to business continuity, creating a loss in revenue. With that being said, the right thing to do would be to pay the security researcher. In a circumstance such as the one described, there would be far more loss in a breach scenario than there would ever be if the researcher were adequately paid. Programs should consider setting a moral example and paying for the vulnerability findings.

P4 -- Apache Server Status Page Leaking Server Information

  1. What is the level of impact of this vulnerability? Could it lead to extremely sensitive information leakage? No, there's no sensitive information leakage. Several internal IPs are displayed, but there's no meaningful impact.
  2. If no sensitive information is leaked, could the vulnerability severely damage company reputation? No, there's no reputational damage.
  3. Do we value the researcher and want to reward them? This researcher has reported to us many times before. However, we don't see any severe impact.

When a lower-end vulnerability is reported out of scope, it's not technically immoral to refuse to reward hackers for their findings. Expectations that were set out early in the process are fair game. However, consider paying researchers a mild bonus or sending a t-shirt. A little bit of gratitude, especially for someone who went out of their way to help, goes a long way.
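To make the triage explicit, the three questions and the two dry runs above can be reduced to a toy decision helper. This is my own sketch, not code from the book, and real payout decisions involve far more nuance:

```python
from dataclasses import dataclass

@dataclass
class OutOfScopeReport:
    leaks_sensitive_data: bool   # question 1
    damages_reputation: bool     # question 2
    researcher_valued: bool      # question 3

def payout_decision(report: OutOfScopeReport) -> str:
    """Mirror the chapter's dry runs: impact drives a bounty,
    goodwill alone drives a token reward."""
    if report.leaks_sensitive_data or report.damages_reputation:
        return "pay a full bounty"
    if report.researcher_valued:
        return "consider a small bonus or swag"
    return "resolve without a monetary award"

# The P1 admin takeover and P4 server-status examples from above
p1 = OutOfScopeReport(True, True, True)
p4 = OutOfScopeReport(False, False, True)
print("P1:", payout_decision(p1))  # pay a full bounty
print("P4:", payout_decision(p4))  # consider a small bonus or swag
```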

4.11 Fair Judgment on Bounties

It's easy to stray from the moral path as a program manager. At times, it can seem enticing to claim plausible deniability for an issue. With a managed program, the triagers only know your environment as well as they can from an external perspective. Bug bounty programs are only as efficient as the engineers and managers who run them. Similarly, programs can only be as honest as the parties responsible for validating vulnerabilities.

There are many blind exploitation paths that can easily be denied, or are questionable from the perspective of an attacker, and it takes adequate communication between the researcher and engineer to confirm the existence of some vulnerabilities.

If a researcher were to stumble across a GitHub repository for an organization and find SQL credentials, it would be an extremely valuable finding, especially if they could log in. If the login is restricted to the internal network, however, they may not have a way to immediately control the server. At times, confirming impact may take internal verification, and that verification can fall outside the line of sight of both the triage team and the researcher.
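A secret scan along these lines is easy to sketch. The regex and helper below are illustrative only; in practice, dedicated secret scanners such as gitleaks or truffleHog cover far more credential formats:

```python
import re
from pathlib import Path

# Illustrative pattern for database connection strings with inline credentials
CRED_PATTERN = re.compile(r"(mysql|postgres(?:ql)?)://\w+:[^@\s]+@[\w.-]+",
                          re.IGNORECASE)

def find_credentials(repo_dir: str):
    """Yield (path, line number, match) for suspected DB credentials."""
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            hit = CRED_PATTERN.search(line)
            if hit:
                yield path, lineno, hit.group(0)

# Scan a locally cloned repository
for path, lineno, secret in find_credentials("."):
    print(f"{path}:{lineno}: {secret}")
```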

About the author
John Jackson is a senior offensive security consultant and founder of Sakura Samurai 桜の侍, a hacking group dedicated to legal hacking. He is best known for multiple CVEs and government and enterprise security research contributions. Jackson has contributed to the threat and vulnerability space, disclosing several pieces of vulnerability research and assisting in resolution for the greater good. He continues to work on several projects and collaborates with other researchers to identify major cyber vulnerabilities.
