Measuring Up: How to Architect a Systematic Security Program – Part 2

In Part 1 of this series, we discussed how organizations can go about selecting a framework for implementation. To measure your organization effectively against the selected framework, you must take five crucial steps before doing any assessment or analysis.

  • Define the rating scale
  • Define the rating criteria
  • Determine how to address differing control implementations across organizational departments
  • Determine how to measure progress over time
  • Define the target state

Define the Rating Scale

Defining the rating scale has implications across the remaining measurement steps and therefore must come first. At this step, we want to nail down two specific items: what scale or terminology we will use to score ourselves, and how granular we need to be.

Scale and Terminology

Some organizations might already have a rating system in place for other programs, such as vulnerability management, risk management, or prioritization of development tasks. Those organizations might want to reuse an existing paradigm like these to create consistency in terminology across the organization. Doing so helps stakeholders more easily understand the relative importance of any findings or recommendations.

Alternatively, an organization might adopt an industry-standard scale and terminology. Doing so can make it easier to communicate assessment results to C-level executives, boards of directors, and external stakeholders such as auditors or clients performing vendor risk assessments.

Praetorian favors the NIST CSF for establishing a security framework, as discussed in our previous post, but we recognize that its lack of measurement detail is a drawback. We have elected to pair it with the industry-standard Capability Maturity Model (CMM) for measurement when performing our assessments. This pairing has broad applicability across clients, but we have also asked that the next version of the NIST CSF include more details on measurement.

A Note on Granularity

The issue of granularity will affect each of the five steps involved in measuring an organization against its chosen framework, but certain scales and terminologies will dictate an associated granularity moving forward. For very small organizations, a binary model of Yes/No or Implemented/Not Implemented might be sufficient. This may also be a sufficient model for very early iterations of the assessment or for self-assessment activities.

An alternative three-level model essentially boils down to Not Implemented, In-Progress, and Implemented (or some variation thereof). This model is similarly suitable for smaller organizations that simply need to get a program started and are not as concerned with detailed year-over-year comparisons.

Larger organizations may want more granularity so they can more easily identify progress in subsequent iterations. The Capability Maturity Model (CMM) that Praetorian uses is a five-level model that provides enough granularity to see progress and regression year over year. Praetorian does not recommend any model with more than five levels, as the assessment effort required to gather the data needed to distinguish between levels at that granularity is not cost effective.

Define the Rating Criteria

After selecting an appropriate scale, the next step is to create a rubric that defines the requirements for each rating level. The goal is to be as quantitative as possible, but organizations can choose from a spectrum of implementation. In general, our criteria for each maturity level boil down to the answers to these questions:

  • People:  Who is or would be responsible for this objective?
  • Process: What processes or policies exist to meet this objective and are they documented?
  • Technology:  What technology do we have in place that affects this objective?

For the broadest applicability across organizations, controls, and functions, define criteria at the rating model level with general descriptions and definitions that apply to all situations. Praetorian recommends that organizations include examples for each rating level here to further describe how the scale should be used.

An example of what this might look like follows:

Level 1: Initial – At this level, no organized processes are in place. Processes are ad hoc and informal. Security processes are reactive and not repeatable, measurable, or scalable.

Level 2: Repeatable – At this stage of maturity, some processes become repeatable. A formal program has been initiated to some degree, although discipline is lacking. Some processes have been established, defined, and documented.

  • Ad hoc processes based on tribal knowledge

Level 3: Defined – Here, processes have become formal, standardized, and defined. This helps create consistency across the organization.

  • Defined process may not be applied consistently
  • Process is “opt-in” in nature

Level 4: Managed – At this stage, the organization begins to measure, refine, and adapt its security processes to make them more effective and efficient based on the information it receives from its program.

  • Defined processes are applied broadly and have an enforcement mechanism
  • Process is enforced and cannot be circumvented without approval

Level 5: Optimizing – An organization operating at Level 5 has processes that are automated, documented, and constantly analyzed for optimization. At this stage, cybersecurity is part of the overall culture.
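
To make the rubric easy to apply consistently, the level names and summary criteria can be captured in a simple lookup. The sketch below is one minimal way to encode the scale above in Python; the structure and exact wording are illustrative, not a prescribed format.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """Five-level, CMM-style scale used to score each control."""
    INITIAL = 1
    REPEATABLE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMIZING = 5

# Summary criteria per level, condensed from the rubric above.
RUBRIC = {
    MaturityLevel.INITIAL: "Ad hoc, reactive processes; not repeatable, measurable, or scalable.",
    MaturityLevel.REPEATABLE: "Some repeatable processes; formal program started, discipline lacking.",
    MaturityLevel.DEFINED: "Formal, standardized, defined processes, though possibly opt-in.",
    MaturityLevel.MANAGED: "Processes measured, enforced, and refined based on program data.",
    MaturityLevel.OPTIMIZING: "Automated, documented processes under continuous optimization.",
}

print(RUBRIC[MaturityLevel(3)])  # look up the criteria for a level-3 rating
```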


As organizations increase in maturity, they will most likely require a more robust rubric. At this level of complexity, organizations may start to define per-control metrics to further increase the objectivity of the assessment. This requires significant effort but pays long-term dividends by preventing quibbling with control owners: the definitions of success are established ahead of time, leaving little room for additional interpretation, as the Figure shows:

Figure: Example metrics for the PR.AC-3.1 “Remote Access Management” control under a granular, mature security framework. 

Defining Controls

When adopting the model the Figure represents, an organization can choose to measure itself only by the subcategories in the NIST CSF. Another method is to define additional “dimensions” that further subdivide the subcategories, then average those dimension scores up to a subcategory score. Doing this enables more granular scoring of individual subcategories that may be implemented differently across the organization, rather than a single overall subcategory score. These controls can be homebrewed or based on existing control standards such as NIST SP 800-53 or the CIS Controls.

Building on the example above, these controls could be defined as seen below:
PR.AC-3 Remote Access is Managed (Subcategory)
PR.AC-3.1 Remote Access logs are stored in (log provider or SIEM name)
PR.AC-3.2 Remote Access authorization follows the principle of least privilege

To score the individual controls, reference the questions above as they relate to each control. For PR.AC-3.1, the Process score could be based on whether a logging standard is in place and whether the remote access tools in use are included in that standard’s scope. Similarly, the Technology score could be based on whether alerts are in place when a critical server is logged into.
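
As a rough sketch of how this could work in practice, the snippet below scores each control by averaging its People, Process, and Technology answers and then rolls the control scores up to a subcategory average. The specific scores, and the choice to average the three dimensions rather than, say, take the minimum, are assumptions for illustration only.

```python
from statistics import mean

# 1-5 maturity scores per dimension (People, Process, Technology) for each
# control under PR.AC-3 "Remote Access is Managed".
control_scores = {
    "PR.AC-3.1": {"people": 3, "process": 2, "technology": 4},  # remote access logging
    "PR.AC-3.2": {"people": 3, "process": 3, "technology": 2},  # least-privilege authorization
}

def control_score(dimensions: dict) -> float:
    """Average the People/Process/Technology answers into one control score."""
    return mean(dimensions.values())

def subcategory_score(controls: dict) -> float:
    """Average the individual control scores up to the subcategory level."""
    return mean(control_score(d) for d in controls.values())

print(f"PR.AC-3 subcategory score: {subcategory_score(control_scores):.1f}")
```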

Defining controls that collectively address the goal of the subcategory may also make it easier to create effective risk statements and perform risk scoring as gaps in implementation are discovered. It can also help centralize the list of technical controls that currently exist and make auditing them in the future much easier.

Defining a Measuring Schema

Once an organization has determined its rating scale, criteria, and controls, it must determine the best way to measure itself. Successful application of any security framework hinges on all parties agreeing to a taxonomy or schema that clearly delineates what constitutes “done” within the context of their work.

Accommodating Organizational Proclivities

No two organizations are the same and therefore it is impossible to create an assessment model that applies to all organizations equally. Even within an organization, different departments may perform actions differently. For example, the corporate inventory of user workstations may be performed using a different tool and process than the management of cloud assets and resources. Because of these differences, organizations need to determine how they want to assess and score these differing areas.

Organizations realistically have two options:

  1. Score the areas separately and essentially maintain multiple controls to comply with across the framework. For example, you might have ID.AM-1: Cloud and ID.AM-1: Workstations and score those separately.
  2. Maintain one score per control and use the lowest score across the various implementations as the overall score. In this example, dedicated people using an asset management system to thoroughly inventory workstations might merit a score of 4, while completely unmanaged cloud assets might only merit a 2. The overall score would be a 2, representing that more work is needed in this area.

In either case, we simply want to ensure that the process flags areas that require improvement. Both options achieve this goal, but the first involves a heavier workload in exchange for more granularity, while the second is much easier to implement but may require more work to tease out why and what improvement is needed.
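
A minimal sketch of the second option follows, assuming per-area scores have already been collected; the area names and scores are illustrative.

```python
# Maturity scores for the same control (ID.AM-1, asset inventory) measured
# across areas that implement it differently.
id_am_1_by_area = {
    "workstations": 4,  # dedicated team using an asset management system
    "cloud": 2,         # largely unmanaged cloud inventory
}

# Option 2: report a single score per control, using the weakest
# implementation so the gap is not masked by stronger areas.
overall_score = min(id_am_1_by_area.values())
weakest_area = min(id_am_1_by_area, key=id_am_1_by_area.get)

print(f"ID.AM-1 overall score: {overall_score} (driven by {weakest_area})")
```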

Progress Over Time

Instituting a security maturity program based on a framework without thinking about how that program will drive progress over time is not useful to anybody. Measurement and recording are key to understanding how the organization is changing over time (hopefully for the better).

Organizations have three primary options to measure their progress over time.

  1. Continuous assessment: The organization implements an assessment schedule that ensures all framework components are assessed on an ongoing basis. In this scenario, organizations usually break the chosen framework into 12 chunks (one for each month of the year) and focus on those areas each month. As an example, the NIST CSF has 23 Categories, which works out to two Categories to assess per month (with one month covering a single Category), as the scheduling sketch after this list illustrates. This balances resources with continuous assessment. The limiting factor is that this model usually requires self-assessment, which may produce biased results.
  2. Annual assessment: In traditional programs, a third party conducts an external assessment annually or biannually to gain additional perspective on the current state of the security program. This approach requires a large effort to perform the entire assessment in a matter of weeks. The benefit is a very accurate understanding of the current state, but this time scale has downsides: it cannot capture changes between assessments, and it provides only infrequent checkpoints on the direction of security improvement initiatives.
  3. Hybrid assessment: By blending continuous and annual assessments, organizations can achieve the best of both worlds. The combination of self- and third-party assessments gives organizations the opportunity to check themselves and then validate their hard work.
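
The scheduling sketch below illustrates the arithmetic behind option 1: spreading 23 Categories across a 12-month cycle yields two Categories per month, with one month covering a single Category. The category identifiers are placeholders rather than real CSF IDs.

```python
# Spread the framework's Categories across a 12-month cycle so that every
# Category is self-assessed once per year.
def build_schedule(categories: list[str], months: int = 12) -> dict[int, list[str]]:
    """Assign categories to months, ceil(len(categories) / months) per month."""
    per_month = -(-len(categories) // months)  # ceiling division
    return {
        month + 1: categories[month * per_month:(month + 1) * per_month]
        for month in range(months)
    }

# Placeholder identifiers standing in for the 23 NIST CSF Categories.
categories = [f"CAT-{i:02d}" for i in range(1, 24)]

for month, cats in build_schedule(categories).items():
    print(f"Month {month:2d}: {', '.join(cats)}")
```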

Defining the Target State

Lastly, organizations must define a desired target state so that security investments do not grow without bound. Not every organization needs to be buttoned up like a national intelligence agency, and some may need to focus more on protective controls than on detective controls. We encourage organizations to define their target state before beginning remediation work so they can ensure appropriate resource allocation. This also helps prioritize remediation efforts when priorities are not otherwise clear.

Defining the target state has a lot to do with the granularity preference the organization has chosen. At the simplest level, the organization could set a single target state across the board. Praetorian does not recommend this approach due to the likelihood of prioritizing non-value-added activities. Alternatively, organizations can define target states at any level the framework allows. For the NIST CSF, organizations can set targets at the Function, Category, and Subcategory levels. For the CIS Critical Security Controls, organizations could use Implementation Groups, Controls, or Sub-Controls.

The most important thing to keep in mind for effective resource management is that target states may differ for each criterion, regardless of how granular an organization’s framework scoring system might be. For instance, if the organization has defined further metrics, as discussed in the “Accommodating Organizational Proclivities” section, it can use those in target setting as well. Opting for dimension scoring allows an organization to set target states on a per-dimension basis, which can be useful for reflecting currently accepted gaps or risks.
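
As a minimal sketch of how per-Category targets could be recorded alongside current scores to surface and rank gaps, consider the following; the Categories, current scores, and targets shown are illustrative assumptions, not recommendations.

```python
# Current vs. target maturity per NIST CSF Category (1-5 CMM scale).
# Targets differ by Category to reflect the organization's risk priorities.
maturity = {
    "ID.AM (Asset Management)":          {"current": 2, "target": 4},
    "PR.AC (Access Control)":            {"current": 3, "target": 4},
    "DE.CM (Continuous Monitoring)":     {"current": 2, "target": 3},
    "RC.CO (Communications)":            {"current": 3, "target": 3},  # accepted as-is
}

# Rank remediation work by the size of the gap between current and target.
gaps = sorted(
    ((cat, v["target"] - v["current"]) for cat, v in maturity.items()),
    key=lambda item: item[1],
    reverse=True,
)

for category, gap in gaps:
    status = f"gap of {gap}" if gap > 0 else "at target"
    print(f"{category}: {status}")
```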

Conclusion

Instituting a security maturity measurement program based on an established framework can feel daunting, but deliberate upfront planning and definitions can help ensure the program is both useful and appropriately scoped for the organization. Failure to take these steps may result in a program that is not measurable over time, requires more resources than the value it provides warrants, or misaligns invested resources during the remediation process.

About the Authors

Connor Slack

Connor Slack is the Lead Risk and Compliance Engineer for Advisory Services. In his time at Praetorian, he has led teams performing CSF assessments for Fortune 500 companies and re-architected our GRC program to focus on offensive security risks. He brings eight years of experience in assessing risks, creating mitigation plans, implementing controls, and maturing security programs, and he has a proven track record of aligning cybersecurity investments with business objectives. Before joining Praetorian, he built out the threat and risk management program for a multi-billion dollar corporation and consulted with over 25 other organizations on building their security and compliance programs.

Nadia Atif

Nadia is a Practice Lead for Risk and Compliance at Praetorian.

Trevor Steen

Trevor is the Practice Director of Assessment and Advisory Services. His team focuses on Red Team Ops, Pen Testing, Incident Response & Threat Hunting.
