
Bugcrowd CTO on the need for responsible disclosure policy, 'good faith'

Bugcrowd founder and CTO Casey Ellis talks about his concerns that the era of 'good faith' between security researchers and enterprises is in jeopardy.

Bug bounties and crowdsourced security research have become big business over the last several years, but Casey Ellis believes the "good faith" between the infosec community and enterprises is in jeopardy.

Ellis, who is chairman, founder and CTO of San Francisco-based crowdsourced security testing platform Bugcrowd, has watched an increasing number of enterprises and government agencies launch bug bounties since the company was founded in 2012. But as those bounties have received more attention and participation from the infosec community, Ellis said he's started to notice growing friction in the vulnerability research process. As such, he said he believes the need for responsible disclosure policies is greater than ever.

Ellis talked at RSA Conference 2018 about what's causing the friction between companies and security researchers and how better vulnerability disclosure policies can improve the situation. Here is part one of the interview with Ellis.

Editor's note: This interview has been edited for clarity and length.

How has your space changed over the last six years since Bugcrowd began?

Casey Ellis: Bugcrowd was the first kind of intermediary platform. We didn't invent bug bounties, but we basically kicked off this whole idea of running it as a service and building out a platform and a team of people in the middle to help out. We kicked that off back in 2012 from Sydney and then came here. And it's been a pretty interesting six years.

Based on the conversations I've been having with a few folks from the government side, from the legislative and legal side over the past couple of weeks, this industry has operated on good faith quite effectively -- almost surprisingly well, given what we've been doing for the past 20 or 30 years. It's not without incident, but when you think about the scale of what's going on, it's worked quite well.

But what we're seeing is this slowly but steadily increasing cadence where we test the limits of that good faith. And things get misunderstood or misinterpreted, and you have something like what happened at Panera or DJI or Uber. My view of this is it's really just a function of math -- the more people that do this, the more probable those issues become. And you hit a point where you need to start thinking about proper legislative change.

One of the things that we're working on pretty heavily is an open source responsible disclosure policy or vulnerability disclosure policy that we actually started back in 2014. The idea was: How do you create a brief for someone who lives in India and doesn't have English as a first language, one that's legally protective enough for the researcher to feel comfortable, legally protective enough for the customer to feel comfortable, and as short as possible?

That's a pretty interesting set of constraints, but we did it. It got some traction over time, but, frankly, this was before good faith started to get tested, so it didn't exactly blow up.

Now, what we're doing is working with the DOJ [U.S. Department of Justice] to basically update that policy and refresh it with the recent stuff around things like DMCA [Digital Millennium Copyright Act] and carve-outs that have happened in the IoT and healthcare space. We also wanted to make sure that the core intent of CFAA [Computer Fraud and Abuse Act], which is the overarching law here in the U.S., is addressed so that there's safe harbor for the hackers.

And I think these laws have been written -- or were written -- on the premise that if you're a hacker, then you're automatically a bad person. The CFAA didn't really have a consideration in the legal language for, say, the digital locksmiths. It's written for the burglars. And like I said before, I think it's actually worked surprisingly well in terms of extending good faith for the past however many years. But we're starting to get to a point where it needs a reboot.

Why do you think that good faith is being tested now? Is it just because there are so many attacks and so many exploits and companies are just getting scared? Or, is it something else?

Ellis: I refer to what we do sometimes as 'unintended consequences as a service.' People don't plan to create security vulnerabilities. No one intends to do the stuff that we're ultimately tasked with trying to discover, so it's a messy process. It's a messy space, and it's incredibly valuable. And we can add a ton of control to that.

But if your entire purpose is to find things that are unknown and unintended, that creates a lot of variability in how that can happen. If good faith extends around it and protects the people that are participating, that's awesome. But as it happens more frequently, then you can expect that to break a little bit more frequently, as well.

And I'm not preaching doom and gloom on this one. I actually don't think it's critical yet, but I do see it coming over the hill. And I actually think it's something that would be better addressed proactively, rather than reactively.

It does seem like there are more of these incidents today, though.

Ellis: Well, here's the other thing that's happening: When you think about good faith security research, if you're testing a hosted platform without authorization, it is still technically illegal. But it's become a lot like jaywalking. Now, jaywalking doesn't have as noble a purpose as security research, but it's similar in terms of how it's passed off.

The problem comes when the company on the receiving end is shocked or upset or inexperienced, or when they just do a Google image search for 'hackers,' see a bunch of scary-looking faces and assume that the person trying to help them is actually evil. There are all sorts of different things that can cause it.

I think, ultimately, the solution to this is really what we preach around vulnerability disclosure, which is it should be something that everyone does. It should become -- and I think it will become at some point over the next year or two -- basically accepted corporate social responsibility for companies.

It's not just about the effectiveness of the model; it's about the fact that this is a contract that you have with the internet that demonstrates to your customers that you take the risk around their data seriously. And I can't wait for that to happen, because that's going to create a conversation between defenders and breakers at a scale that hasn't happened before.

Let's talk about the other side of responsible disclosure policy. Do you worry there are too many instances of branded vulnerabilities where the researchers or vendors hype up their findings and even exaggerate them for marketing purposes? A lot of people have criticized that kind of behavior and say it taints the infosec research community.


Ellis: Yes, I agree with that. I'm not saying whether that's right or wrong, but I definitely think it's tacky, and I definitely think that it taints [the community]. What it very easily sets up is not just [the accusation] that, 'Oh, you guys are just hungry for money,' in a market that doesn't understand what we actually do as security researchers. It also has the potential to double down on this adversarial expectation that people have when it comes to the hacking community.

This sort of [vulnerability] overhype can reinforce that adversarial thinking. And I think there's also a lot of the "LOL, stupid bug, stupid vendor" [from researchers]. And that kind of stuff drives me nuts, because vulnerabilities happen. They're a product of human creativity.

There has also been vulnerability research with apparent financial motives, like the AMD flaws report from CTS Labs.

Ellis: Yes. Stuff like that is bad. It's bad, and Muddy Waters [Capital] and MedSec is another example. And that's a double-edged sword, because I think that [research] put a lot of heat onto the medical device industry to fix those issues, which ultimately is a good thing. But was it shady? Yes, I think it was kind of shady.

So, it's the [negative] perception. What I'm coming back to is this whole idea of if this is an adversarial conversation instead of a partnership, then it becomes harder to execute well.

Paul Kocher talked about the disclosure process for Meltdown and Spectre in an [RSA Conference session]. He didn't go into specifics, but he did say the process was messy and the way the vendors handled it was problematic, in part, because they obviously didn't expect the bugs and weren't prepared to deal with them.

Ellis: Precisely. This goes back to what I was saying before. They didn't want the bugs to be there in the first place. The whole fact that this is happening is inconvenient and wasn't planned. The fact that they're finding out about it and they can fix it? That part's awesome, and I think the good vendors are the ones that are leaning into that and trying to work out how to operationalize it within their business. But we are talking about stuff that, theoretically, isn't meant to happen, and that makes these sorts of conversations difficult.

And there are two parts to the process: the disclosure from the hacker to the company and then the disclosure from the company to the stakeholders. The main reason that second piece is not widely adopted is vulnerability shaming and this whole idea of it being dirty laundry that they're putting out for everyone to see. That's sort of understandable, but it's also kind of irrational, because bugs happen. And they're always going to happen.
