Cloud security lessons to learn from the Uber data breach

Any organization that uses cloud services can learn something from the 2016 Uber data breach. Expert Ed Moyle explains the main takeaways from the massive breach.

For most technology practitioners -- particularly those in explicitly security-focused positions -- a breach of customer information can seem like the worst-case scenario.

For example, while the full spectrum of fallout from the recently disclosed 2016 Uber data breach -- which affected 57 million users -- remains unclear, negative media attention, emerging lawsuits and other potentially long-term consequences have already arisen.

While Uber has received quite a bit of criticism for several aspects of its response -- the timeline of public notification, the actions taken at the time to keep the situation quiet, including the apparent payment of $100,000 to the attackers, and so on -- the truth is that the root cause of the whole debacle was relatively banal.

At the end of the day, it was a problem with credential management exacerbated by developer access to production. These issues are not rocket science, but it is important to note that, in this case, cloud usage certainly played a role.

In fact, the specific events that led to the Uber data breach should serve as a wake-up call for any organization that uses cloud services. Why? Because what happened to Uber could easily happen to other companies. If you're a security practitioner, that means it's important to take steps ahead of time to minimize the likelihood of this happening at your organization. That isn't always easy to accomplish, but fortunately, there are a few strategies that can help.

What happened in the Uber data breach, and why does it matter?

To understand how to prevent this type of issue, it's important to clearly understand what caused the issue in the first place. In this case, reporting in the trade press suggests that the attackers gained access to a private GitHub repository, used a developer's Amazon Web Services (AWS) credentials that were stored in the repository to gain access to the AWS account and, in turn, obtained the personal data of the 57 million affected individuals.

There are a few potential points along this chain of events that spark concern. The first is the fact that a developer's credentials were in the GitHub repository in the first place. If you've been a software developer or have ever been responsible for securing aspects of the development process, you'll know that all sorts of materials wind up being added to source control. Cryptographic keys, SSH keys, certificates and private keys, usernames and passwords, application account credentials, and even root or admin-level passwords sometimes make their way into source control. While that is never a desirable situation, when the repository in question is GitHub -- and therefore potentially accessible to a much larger population than just an internal development community -- the consequences are potentially worse.
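Catching this class of problem doesn't require anything elaborate. The following is a minimal sketch -- not drawn from the Uber incident, and with illustrative patterns only -- of how a periodic scan of a local repository checkout might flag likely secrets before they reach a shared repository; dedicated tools such as git-secrets or truffleHog do this far more thoroughly, and the same check can run as a pre-commit hook or in CI.

```python
import os
import re
import sys

# Illustrative patterns for common secrets; real scanning tools maintain
# far more complete rule sets than these three examples.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Possible password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_repo(root):
    """Walk a local checkout and flag lines that look like committed secrets."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, 1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                findings.append((path, lineno, label))
            except OSError:
                continue  # unreadable file; skip it
    return findings

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, lineno, label in scan_repo(target):
        print(f"{path}:{lineno}: {label}")
```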

The same is true of the AWS account the credentials unlocked. One might question why the developer had direct access to an account in which millions of Uber customer records were centrally located. In many shops, there are strict walls of separation between development and production. Those barriers can blur in a cloud model, but this is exactly the kind of situation those checks and balances are meant to prevent.
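To illustrate what such a barrier can look like in a cloud model, the sketch below -- assuming AWS IAM roles and the boto3 SDK, with a hypothetical role ARN -- shows a developer exchanging their identity for short-lived, narrowly scoped credentials via STS AssumeRole, rather than holding long-lived static keys that could end up in a repository and unlock production data directly.

```python
import boto3

# Hypothetical role ARN; substitute a role scoped to what the developer
# actually needs (e.g., read-only access to a development account).
READ_ONLY_ROLE_ARN = "arn:aws:iam::123456789012:role/dev-readonly"

def get_scoped_session(role_arn, session_name="dev-session"):
    """Exchange the caller's identity for short-lived, scoped credentials
    instead of relying on long-lived access keys."""
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=3600,  # credentials expire after an hour
    )
    creds = response["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

if __name__ == "__main__":
    session = get_scoped_session(READ_ONLY_ROLE_ARN)
    # The resulting session carries only the permissions the role grants,
    # and anything that leaks expires on its own.
    print(session.client("sts").get_caller_identity()["Arn"])
```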

Mitigation strategies

The truth is that this type of situation can be very challenging to prevent entirely. It's not that there aren't architectural or other controls that provide value -- there absolutely are -- it's just that, given the low barrier to shadow or under-the-radar cloud usage, it can be difficult to know exactly where to apply them.

For example, consider that an architectural strategy like microsegmentation, which can help segment functionality into smaller, more discrete processing units, appears to support keeping a smaller data footprint for each subset of functionality. The truth is that under-the-radar usage -- such as developers using cloud environments as a test bed -- could cause that data to aggregate in the cloud anyway. In other words, shadow cloud usage can undermine even sophisticated architectural strategies like these. Because of this, a more useful starting point is a combination of discovery, threat-aware processes and cultural change.

On the discovery side, building up a reliable understanding of what services are in use, what they are used for and by whom can help ensure that this usage is given the appropriate scrutiny. For example, understanding that there is a GitHub repository in use by the development team can lead the security team to review that usage; periodically examine it to make sure that it does not contain artifacts that are inappropriate from a security point of view, such as passwords or keys; and enable opportunities for training and awareness about what should or should not be stored there.

Discovery, in this case, can use both automated and manual methods to build out the inventory of services in use. Again, this isn't exactly new advice, and it's not rocket science, but when coupled with a workmanlike, systematic examination and review of that usage, it can enable security personnel to locate and address issues like this ahead of time.

Likewise, effort put into understanding the threat environment can provide value; this means a systematic process for examining the threats posed to the application in a structured way, such as application threat modeling.

In the case of the Uber data breach, a careful examination of a threat model might have flagged administrative access to AWS as an area warranting strong consideration -- i.e., it could have helped identify AWS account credential compromise as a particularly high-risk scenario in light of how the application operates. If that avenue were found to be higher risk, questions could be asked -- and the answers evaluated -- about who holds the account credentials, where they are stored and whether that storage is appropriately locked down.
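The output of that exercise doesn't need to be heavyweight. As a minimal, purely illustrative sketch -- the fields and example values below are hypothetical, not taken from Uber's environment -- a threat model entry for the credential compromise scenario might be captured as structured data that the team reviews and updates over time.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a lightweight application threat model."""
    asset: str
    threat: str
    likelihood: str          # e.g., low / medium / high
    impact: str
    mitigations: list = field(default_factory=list)

# Example entry capturing the scenario discussed above.
credential_compromise = Threat(
    asset="Production AWS account holding customer records",
    threat="Developer credentials leaked via source control",
    likelihood="high",
    impact="high",
    mitigations=[
        "Keep credentials out of repositories (automated secret scanning)",
        "Use short-lived, scoped credentials instead of static keys",
        "Separate development and production accounts",
        "Alert on anomalous access to the data store",
    ],
)

if __name__ == "__main__":
    print(credential_compromise)
```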

Lastly, cultural approaches can play a role in supporting the above: training and awareness opportunities around developer best practices, expectations for safeguarding information, safe and secure usage of cloud services, etc. While it's hard to imagine that this type of approach, on its own, will always lead to the perfect outcome, it can certainly help.

At the end of the day, ensuring that everyone in an organization uses the cloud securely 100% of the time is a challenging exercise. But keeping a handle on what is being used, systematically evaluating the threat scenarios around that usage and building a strong culture of security can help you correct errors as they are discovered and find -- and ideally fix -- the situations that carry the most risk.
