4 criteria to measure cybersecurity goal success
Measuring the success of cybersecurity goals is challenging because they are components of larger goals and often probabilistic rather than deterministic.
Security goals exist to help companies accomplish their organizational goals. After all, you wouldn't use antivirus software if there were no such things as malware attacks that steal sensitive data, and you wouldn't hire security guards to protect a room with nothing in it, wrote Diana Kelley and Ed Moyle in Practical Cybersecurity Architecture.
It is, therefore, critical for architects to understand their company's organizational goals, which then cascade into technology goals and, finally, cybersecurity goals.
In their book, Kelley and Moyle offered the example of a company whose organizational goal is to build a customer base. The associated technology goals would be enabling customer interactions and providing customer-focused tools to employees. These technology goals, in turn, would translate to cybersecurity goals that include authenticating user and employee access, keeping customer data confidential, and ensuring tool reliability and data integrity.
Defining goals is only one step of the process. Another is measuring their success -- a difficult task, especially in the case of cybersecurity goals. While the success of an organizational goal of "build a customer base" can be measured in numbers, it's far more difficult to measure the success of authenticating access or keeping data confidential.
Kelley and Moyle suggest four key criteria against which to evaluate cybersecurity goals: effectiveness, maturity, efficiency and alignment.
Learn more about these dimensions in the following excerpt from Chapter 2 of Practical Cybersecurity Architecture. Download a PDF of the entire chapter for more information on business goals and mapping, and check out a Q&A with the authors about the difficulty of defining cybersecurity architecture in the first place.
Dimensions of success
"I'm not going to say that technical skills don't matter, but the skill that will enable all other skills to come out is retrospection. I don't like to use the term "post-mortem," because in reality nobody has to die for us to learn important lessons. When I teach people to threat model, I tell them to start out with as much time for retrospectives as they spend in their analysis. If you start out cooking and the first time you chop an onion the pieces all come out all different sizes, you can learn and practice until you get better. But often, this might not be the most important skill -- a more important skill might be being able to tell if the bottom of the pan is burning for example. Sometimes the skills that are most important are the ones that take the least time to learn."
-- Adam Shostack, president of Shostack & Associates
Understanding what the goals are is an important step (arguably the most important step), but it's important to note that it's not the only step in the legwork that we need to do in order to gather the "raw materials" required to understand the architectural universe. Specifically, we also need to understand how the organization measures itself against those goals. With a goal that is concrete and specific, this is straightforward. For example, a high-level financial goal such as increase profitability or provide shareholder value is easily measured financially -- that is, by looking at revenue, expenses, operating costs, and so on. Other goals (for example, social responsibility, providing non-tangible value) can be harder to measure.
Security goals are often particularly difficult to measure against as many of them are probabilistic in nature. For example, a goal such as decreasing the likelihood of something undesirable coming to pass (a breach, for example) is based on the probability that the outcome will occur, rather than something directly measurable in and of itself. This matters because, as we discussed earlier, there are often several different implementation strategies to achieve the same outcome. These strategies will not always be equal in how they do so or how they impact other goals.
As an example of what I mean, consider a software vendor -- that is, a company whose business is to develop and sell application software. They might set a goal that all source code is reviewed for security vulnerabilities as they conclude that doing so, in turn, supports business goals such as competitiveness and long-term profitability. One way they might choose to implement this is by using a source code scanning tool (for example, lint if they're working in C); another strategy might be hiring a team of experienced C developers to manually audit each and every line of code for the product. These two approaches accomplish the same thing (vet source code for errors), but perform very differently with respect to cost, efficacy, time investment, and so on. Understanding how each goal is measured informs what strategies are most advantageous to achieving the outcome we want.
There are multiple dimensions along which we can evaluate a given approach to implementation. In fact, it's arguable that there are a near-infinite number of dimensions along which any given strategy can be evaluated and that each organization might have a different set. However, as a practical matter, there are four that we will consider here:
- Effectiveness
- Maturity
- Efficiency
- Alignment
We'll explain what we mean by each one in detail in the following subsections. Again, we are trying to understand here how the organization measures itself and less so how we might measure in isolation. Thus, keep in mind, as we move through this, that we want to understand these dimensions generically, but also in terms of how an organization might employ them as a method of self-measurement.
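To make this concrete before we dig into each dimension, here is a minimal sketch -- in Python, with entirely hypothetical scores -- of how the two source code review strategies from the earlier example might be recorded against these four dimensions. The scoring scale and the numbers are invented purely for illustration; the point is only that a single goal can be served by strategies that measure very differently.

```python
from dataclasses import dataclass

@dataclass
class StrategyAssessment:
    """Hypothetical scorecard for one implementation strategy.

    Scores are illustrative placeholders on a 1 (poor) to 5 (strong)
    scale -- a real assessment would define each scale explicitly.
    """
    name: str
    effectiveness: int  # how well it achieves the stated security goal
    maturity: int       # reproducibility/reliability of the supporting process
    efficiency: int     # cost and time required to operate it
    alignment: int      # fit with the organization's skills and culture

# Two ways to vet source code for vulnerabilities (scores are made up).
strategies = [
    StrategyAssessment("Automated source code scanning", 3, 4, 5, 4),
    StrategyAssessment("Manual line-by-line expert review", 4, 2, 1, 2),
]

for s in strategies:
    total = s.effectiveness + s.maturity + s.efficiency + s.alignment
    print(f"{s.name}: total score {total} / 20")
```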
Effectiveness
The first evaluation criterion that we'll look at is how well the implementation strategy performs at doing what it is designed to do. For business goals, this can be straightforward. If the strategy is designed to make money, how well does it do that? How much money does the organization generate? If a strategy is used to reduce development time, how much time does it remove from the process?
Security goals likewise can be looked at, evaluated, and measured through the lens of effectiveness to the same degree that other types of goals can -- that is, how well does the security measure we implement perform at achieving the goal?
With security controls, this dimension is particularly important as controls are not equivalent. In fact, even when security measures are designed with identical outcomes in mind, individual implementations can impact how well they perform. As an example, consider the difference between Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access II (WPA2). Both are designed to do the same thing (more or less) from a very high-level point of view: namely, to provide confidentiality of data transmitted over wireless networks. However, they have vastly different characteristics with regard to their utility and security.
Those who are familiar with the history of WEP will know that it has serious security vulnerabilities -- serious enough that they allow an attacker to passively monitor the network and break the encryption in a matter of minutes or hours (Fluhrer, Scott et al., Weaknesses in the Key Scheduling Algorithm of RC4). By contrast, WPA2 is the current generally accepted optimal protocol for providing robust confidentiality on a wireless network. Therefore, while they both serve the same underlying security goal, they differ vastly in terms of how well they do it. They are not equally effective.
This, in a nutshell, is effectiveness: the efficacy security measures have at satisfying the goal/requirement -- or, their success at delivering what is intended. Any organization that has conducted a security program review that examined what controls they have in place, and provided feedback on those that were not implemented, has likely been measured along this axis.
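As a simple illustration of how an organization might put a number on this axis, the following sketch computes a crude effectiveness rate from hypothetical monitoring figures. Both the metric and the numbers are assumptions chosen for illustration only; real programs typically track several complementary measures.

```python
def effectiveness_rate(attempts_observed: int, attempts_stopped: int) -> float:
    """Fraction of observed attack attempts the control stopped.

    A crude, illustrative metric: real measurement usually combines
    detection rate, false positives, time to respond, and more.
    """
    if attempts_observed == 0:
        return 0.0
    return attempts_stopped / attempts_observed

# Hypothetical monthly figures from, say, an email filtering control.
phishing_attempts_seen = 1200
phishing_attempts_blocked = 1140

rate = effectiveness_rate(phishing_attempts_seen, phishing_attempts_blocked)
print(f"Control effectiveness this month: {rate:.1%}")  # -> 95.0%
```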
Maturity
The second dimension to be aware of is the maturity of the implementation. This dimension is particularly relevant for security measures that have a necessary underlying procedural component.
Note that by maturity here, we don't just mean how long something has existed (that is, how old it is chronologically) or its acceptance in the industry (that is, the maturity of a technology). These things can be important, too, in some cases, but instead, we're referring to something else: the reproducibility and reliability of the processes that support the implementation. This is the same sense of maturity that is used by frameworks such as Capability Maturity Model Integration (CMMI), developed by Carnegie Mellon (now stewarded by the CMMI Institute) for understanding software development process maturity (CMMI® for Development, Version 1.3. Software Engineering Institute, Carnegie Mellon University).
Two security measures, both themselves designed to fulfill a particular niche and achieve a particular security goal, can have very different maturity characteristics. For example, consider the respective incident response processes at two hypothetical firms. In the first organization, there is no written process, no case management or other support software, no automation, and no metrics collected about performance. In the second, they have a well-documented and highly automated process where metrics about performance are collected and improvements to the process are made over time based on those metrics.
In both cases, the function is the same: incident response. The two processes might even, on average, be equally effective at servicing the incident response needs of the organization. However, one organization employs an immature process to satisfy these needs: it is ad hoc, relatively unmanaged, and reactive. Using a scale such as the one contained in the CMMI, you might call their process initial (level 1) on the maturity spectrum. The second organization has a much more "mature" process: it's reproducible, consistent, and managed. Depending on the degree of ongoing improvement and optimization, it might fall into the quantitatively managed or optimizing maturity levels (level 4 or 5, respectively, in the CMMI model).
Again, even if the goal is the same -- that is, the intent and function are the same and even the efficacy and performance are the same -- the maturity of the implementation can vary. There are advantages to using processes that have higher maturity. For example, the more mature a process is, the more consistently it is performed each time, the more easily it can recover from interruptions such as the attrition of key personnel, and the easier it is to measure and optimize. There are, though, as you might assume, potential budget and time investments required to bring a process from a lower state of maturity to a higher one. As a consequence, a higher maturity of implementation might itself be a valuable target for the organization's processes.
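As a rough illustration of the idea (and emphatically not the formal CMMI appraisal method, which is far more involved), the following sketch maps a few observable process attributes to an approximate maturity level, using the two hypothetical incident response programs described previously.

```python
def approximate_maturity_level(documented: bool, consistent: bool,
                               measured: bool, optimizing: bool) -> int:
    """Very rough mapping of process attributes to a CMMI-style level.

    Illustrative heuristic only -- not the formal CMMI appraisal method.
    """
    if optimizing and measured and consistent and documented:
        return 5  # optimizing: continuous, metrics-driven improvement
    if measured and consistent and documented:
        return 4  # quantitatively managed
    if consistent and documented:
        return 3  # defined
    if documented:
        return 2  # managed / repeatable
    return 1      # initial: ad hoc and reactive

# The two hypothetical incident response programs described above.
print(approximate_maturity_level(False, False, False, False))  # firm 1 -> 1
print(approximate_maturity_level(True, True, True, True))      # firm 2 -> 5
```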
Efficiency
Another dimension that matters for control implementation is the efficiency of operation. Earlier in this chapter, we used the example of two ways to implement application source code analysis: automated source code scanning versus manual code review. We alluded to the fact that these two approaches have different dynamics and characteristics. One of the places where they diverge significantly is in overall cost and efficiency -- both in terms of dollars spent and the time it takes staff to perform the tasks involved.
This can be true of any security implementation. Measures might perform similarly -- or even equivalently -- in terms of effectiveness and/or maturity, but still have very different financial cost or time investment requirements.
To make this clearer, consider spam monitoring. Say that, to make the best use of staff time and prevent phishing attempts, we want to filter out suspicious, unsolicited, or otherwise undesirable emails. One approach might be to employ automated filtering; another might be to hire a team of people to read all incoming email looking for spam. Assuming for a minute that each approach is equally likely to catch inbound spam, there are obvious advantages to the automated approach. Putting aside the privacy implications of a team of strangers reading your email, the manual approach would also cost significantly more, both in time and dollars.
This is what we mean by efficiency in this context. You might alternatively refer to this as "cost-effectiveness" for organizations such as commercial companies, where staff time and dollars spent are functions of each other. Since this is not always the case (for example, in educational institutions), we thought "efficiency" was a more descriptive word choice.
This is a particularly important metric for the security architect when it comes to implementing security measures. Why? Because any security measure we put in place comes with an opportunity cost. Assuming resources are constrained (that is, you don't have an infinite budget or unlimited staff), every measure you put in place comes at the cost of something else -- whatever you could have done instead but didn't because you went down that particular implementation path.
For example, if you dedicate your entire staff to one security measure (say, having them manually filter inbound email for spam), there are other security measures you can't implement because those resources are engaged elsewhere. Since staff can't do two things at once, the opportunity cost of the path you chose is whatever those resources would be doing instead if their time wasn't completely occupied. Likewise, if you use your whole budget on one very expensive control, you won't have budget left over for other controls that could also be useful.
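To make the opportunity cost tangible, here is a back-of-envelope sketch comparing the two spam-handling strategies. Every figure -- mail volume, licensing cost, salaries, review time -- is an assumption invented purely for illustration; the point is that two strategies with similar effectiveness can differ enormously in what they consume.

```python
# A back-of-envelope efficiency comparison for the spam example above.
# Every figure here is an assumption chosen purely for illustration.

EMAILS_PER_DAY = 5_000
WORK_DAYS_PER_YEAR = 250

# Strategy 1: automated filtering (hypothetical per-mailbox licensing).
mailboxes = 500
license_cost_per_mailbox_per_year = 30          # assumed
automated_annual_cost = mailboxes * license_cost_per_mailbox_per_year

# Strategy 2: staff manually reviewing every inbound message.
seconds_per_email_review = 20                   # assumed
reviewer_annual_cost = 60_000                   # assumed fully loaded cost
hours_per_reviewer_per_year = 1_800             # assumed

review_hours_per_year = (EMAILS_PER_DAY * WORK_DAYS_PER_YEAR
                         * seconds_per_email_review) / 3600
reviewers_needed = review_hours_per_year / hours_per_reviewer_per_year
manual_annual_cost = reviewers_needed * reviewer_annual_cost

print(f"Automated filtering: ~${automated_annual_cost:,.0f} per year")
print(f"Manual review: ~{reviewers_needed:.1f} full-time reviewers, "
      f"~${manual_annual_cost:,.0f} per year")
```

Even under assumptions generous to the manual approach, the staffing cost dwarfs the hypothetical licensing cost -- and every one of those reviewer-hours is an hour not spent on some other control.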
Alignment
The last dimension that we'll look at here is what you might call alignment with organizational culture and skills. There are times when what you do -- or how you do it -- will be influenced by other factors. As an example, say that I wanted to watch a movie that isn't available for streaming and is only available on DVD. If I don't own a DVD player, it really doesn't matter how good the movie is -- or how much I want to see it. Until I either buy a DVD player or the movie is released in a format I have access to, I won't be able to watch it.
This matters with security measures too. For example, if I don't have access to staff with the right skillset to maintain or operate a security measure, or if, for some other reason (such as culture), the measure would be intolerable or untenable to the organization, it becomes a less compelling choice.
An example that will be familiar to many security practitioners is the forensic examination of compromised systems. As we all know, there is quite a bit of specialized expertise that goes into ensuring courtroom admissibility of evidence gathered during an investigation. We need to preserve the chain of custody, collect evidence in a way that doesn't corrupt the crime scene, be careful to prevent the possibility of writing to source media, and so on. Most organizations, unless they specialize in forensics, don't have the staff to do this themselves. It's not that they couldn't acquire those staff (in fact, some larger organizations do), train them, or keep their skills current. Rather, they choose to seek support from outside specialists in the event that such capability is needed, because maintaining that skill base can be expensive relative to the amount of time those skills will actually be needed.
In that example, a security measure that specifically requires specialized forensics skills to operate would not be a great fit for an organization that has chosen to outsource that specialization. It's not that either approach is right or wrong; it's just a question of whether it aligns with other choices the organization has made. In the same way that purchasing software that runs solely on OS X is a non-optimal choice for an environment that is Windows-only, this security measure requires something that the organization doesn't have access to.
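If it helps to think about alignment mechanically, the following sketch frames one narrow slice of it -- required versus available skills -- as a simple gap check. It is illustrative only, with made-up skill names; cultural fit and sourcing decisions like the forensics example above rarely reduce to a set difference.

```python
from typing import Set

def alignment_gap(required_skills: Set[str], available_skills: Set[str]) -> Set[str]:
    """Skills a proposed security measure needs that the organization lacks.

    Purely illustrative: real alignment questions involve culture and
    prior sourcing decisions, not just a skills inventory.
    """
    return required_skills - available_skills

# Hypothetical measure that assumes in-house forensics expertise.
measure_needs = {"disk imaging", "chain-of-custody handling", "memory forensics"}
org_has = {"network administration", "incident triage", "disk imaging"}

gap = alignment_gap(measure_needs, org_has)
print(f"Unmet skill requirements: {sorted(gap)}")
```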
With this new footing in understanding goals and the dimensions of success, we can embark on a quick journey through the policies, procedures, and standards that will help ease the identification of organizational goals.