Q&A: Talking bug bounty programs with Bugcrowd's Casey Ellis
As bug bounty programs become more mainstream, Bugcrowd founder and CEO Casey Ellis offers insights into rewards, best practices and tips for getting the most bang for the buck.
After discovering and disclosing the Cloudbleed vulnerability in Cloudflare's content delivery network earlier this year, Google Project Zero researcher Tavis Ormandy wrote that Cloudflare's bug bounty program, offering a T-shirt as its top prize, "did not convey to me that they take the program seriously."
On the other hand, last year, Tod Beardsley, director of research at Boston-based Rapid7, told SearchSecurity that bug bounties offering "just props and kudos and T-shirts" could be just as effective -- if not more so -- than programs that awarded cash.
As Google's Project Zero Prize showed when its hunt for Android remote exploits, with a top prize of $200,000, uncovered exactly zero bugs, big rewards do not always guarantee the discovery of new bugs -- but that doesn't mean a bug bounty program that fails to find any bugs is a failure.
Casey Ellis, co-founder and CEO of Bugcrowd Inc., based in San Francisco, explained the apparent paradox when he sat down with SearchSecurity earlier this year to talk about bug bounty program rewards, as well as what the future holds for the bug bounty concept as it moves into the mainstream.
Editor's note: This transcript has been edited for clarity and length.
Do bug bounty programs need to promise big cash rewards, or can a bug bounty program succeed by giving out props and kudos and T-shirts, as Beardsley has said?
Casey Ellis: It is true that if you extend an invitation out to the research community, even without any reward or thanks at all, there are organizations that get a benefit from the white hat hackers who come to the table wanting to make the internet safer, who find things and let them know. And that information, and the work that goes into finding it, is valuable. But it can only be accessed to a limited extent, I believe, for free.
If you're talking about sending out T-shirts or putting people up in a hall of fame, giving them some recognition they can use to gain status and feel good about what they've just done, that's really important. Beyond that, they're also starting to build their reputation and credibility as skilled professionals in the industry; that's all the better, and it forms the starting point for the spectrum of incentives that are available in this market.
Should bug bounty programs set rewards commensurate with the impact the bugs have?
Ellis: There are two sides to the value of the information that's transacted when someone like Tavis [Ormandy] -- or anyone else -- submits a vulnerability. One side is the loss and the actual value that information brings to the company. You can think about it in terms of loss prevention or whatever else, and you could argue, 'Yeah, this bug is valuable. It's worth more than a T-shirt.' At the same time, you can argue -- along the lines of what I think Tod [Beardsley] was saying -- 'This is a marketplace.'
Ultimately, the value of whatever's being transacted is set by the seller and the buyer on a case-by-case basis. So, if you go out there and say, 'No, we're not going to pay. We're going to give things away for, you know, reputation or whatever else, but we're not going to extend cash,' and you still get that information, then the value of the information you're getting is a T-shirt, or it's a listing in a hall of fame. And that's actually set, at that point, by the supply side, by the hackers that have come up to the table trying to help out.
What we believe, and what we try to push companies toward as an organization, is that you should be paying for this information. This is valuable.
Beardsley told us, 'The bad guys don't care about the things that are in and out of scope.' Is there a way to reward bug bounty submitters who are thinking outside the box and producing results, but aren't finding bugs?
Ellis: There is -- and to an extent, I agree with that. But, at the same time, the painful reality for someone running a security program within a business is that you need to be able to distinguish good guys from bad guys, and to escalate issues that you see potentially compromising data. That's one of the reasons that people limit scope, not only in terms of which targets they're going to invite or allow a security researcher to hit, but what researchers can do once they've found them. They might have things on the back end that are set up to escalate issues if it looks like someone's exfiltrating data or stealing people's passwords or whatever else. And that's the company's prerogative; it's a function of their ability to defend themselves.
The concept of bug bounty programs started in 1995 with Netscape. It got popular in 2010 with Facebook, and we've seen a pretty strong boost in adoption over the past three years. But right now, there are only 1,500 to maybe 2,000 companies actively incentivizing people to come and disclose vulnerabilities to them, which means there are millions of other organizations that aren't doing this at this point. They're not prepared for it, and they don't know how to handle it. Their concept of security is that they can control the things that they can see, and making sure that control is maintained throughout the course of the security program is really important to them.
I fundamentally agree with what Tod said around the idea that hackers don't respect scope. But, at the same time, you can't just expect an understaffed security team to cope with the internet suddenly being invited to do whatever it wants if they're not ready for it. That's going to cause the company issues; it's going to cause issues from an IT standpoint, with all sorts of trickle-down problems, if people jump the gun on that.
It's one of the reasons that we advocate the whole idea of 'crawl, walk, run.' Start slow, get used to interacting with people that are outside the four walls of your organization. Figure out how you're going to respond. Figure out how you're going to fix the vulnerabilities that are discovered. And then, over time, you ramp up and start to extend that to the point that you can actually make your program public and invite everyone in.
What else should we know about bug bounty programs?
Ellis: This is a concept that started off in the tech industry with a bunch of early adopters, people who are very tolerant of risk and want to adopt the latest, greatest thing. That's great, because it introduces new concepts to the broader market. But it's also not as great, because they're so far out on the fringe of innovation. If you think of it like a spectrum, you put Facebook down at one end and a company like JPMorgan Chase at the other. What Facebook does is interesting to a bank or to a defense company or whatever else, but it's not super-relevant, because Facebook can afford to take risks as a business that more traditional organizations can't.
One of the questions around this industry in general has been, 'Is this ever going to break out of the tech bubble and become something that everyone does?' And what we've seen over the past six months is launches and people adopting the bug bounty model in a way that validates that, no, this is actually a horizontal solution to the problem of discovering vulnerabilities before the bad guys do.
It sounds like you're getting more mainstream, nontech companies to participate. Can you give any examples of consumer companies that are working on this kind of thing?
Ellis: We run private programs, as well as public programs for people who know what a bug bounty program is and want one. The private programs are for people who want to appropriate bug bounty concepts to get a better return on the investments they're already making, like penetration testing or using continuous security assessment companies. And part of what we're doing is saying, 'We can fill the gap. The things you're already doing aren't bad, but there's a gap left by that model that we can basically disrupt and get you ahead of.' So, in terms of examples that are public, Mastercard is a really good one. They're an innovative organization, but you wouldn't necessarily put them in the same bucket as Facebook, right?
We're at a point where bug bounties are starting to become normal. I actually said this to my team when we were down at Black Hat and DefCon last year: 'Enjoy being the interesting company at the conference this year, because next year, we're going to be kind of passé.' Journalists are not writing about [bug bounty] launches as frequently as they used to, because launches have become the norm. It's not newsworthy anymore; the larger trend becomes the newsworthy thing. It's like saying, 'Oh, wow -- this is actually breaking out from something that radical, cutting-edge, innovative companies do to something that is just normal.' And that, in and of itself, is a pretty interesting track.
What's the most important thing if you're doing a bug bounty?
Ellis: Caring about what you receive from the crowd, without a doubt. The whole idea is making sure that, when you start one of these things, it's not just about the press release and going out to market and saying you're awesome because you're running a bug bounty program. You have to actually care about what the research is telling you and fix it. That becomes, really, the core metric of whether an organization running a program is actually deriving real security value, as judged by the researchers that are participating and submitting bugs -- and by us in the middle.
I think it's still, to an extent, cool to go out to the market and say, 'Yeah, we're starting a bug bounty program, and we're really focused on customer security and whatnot.' But the proof of the pudding is in the eating. It's one of those things where, over time, you can see, based on the behavior of the organization, whether or not they actually care about making sure their customer data is secure.
What else should people be asking about bug bounty programs and how they work and how they can be made to work?
Ellis: The thing that we've tried to do as a company is fit the concept and the value and the power of bug bounty programs and crowdsourcing into contexts that already exist. A lot of the business we do is with folks that are looking for better ROI from penetration testing, or folks that have been using a continuous security scanning vendor for years and want to get better bang for their buck out of that investment. And that bang for the buck comes in the form of assurance that we're deploying the most effective possible way of identifying vulnerabilities.
And if people find things, if they identify issues and get paid, that's good. And if they don't find things, that's good, too, because it means we've achieved resilience as an organization; we can withstand a best-of-breed approach to trying to break it.
How likely is it to have an effective bug bounty program that doesn't find bugs?
Ellis: Humans write software, and humans make mistakes. And you've got humans who are incentivized to attack software for whatever gain they're interested in. There's all this human stuff in the mix that you have to figure out how to negotiate, which is where we come in, because we basically try to bridge the gap and play both sides.
In every program that we've ever run, there's a huge bump at the front end of it. They start a program, and there are a ton of things that get found early on, even if they've been doing pen tests, or static code analysis or whatever else. And the phrase I use to describe that is 'assurance debt.'
So, for them, the assurance has been coming from traditional tools that aren't capable of finding the things that we're able to find, and there's a debt that's incurred because of that. When they engage the crowd, all of a sudden, they've got the power to find those things and basically turn those unknown unknowns into knowns so they can fix them.
There's this initial hump of vulnerabilities that get discovered, but after that is where the rubber really hits the road. It's important to make sure they're creating a feedback loop between the folks who think adversarially and the folks who don't, the ones who are actually responsible for building the software and creating that type of service in the first place.