Acuity Ramp

Why Acuity? Isn't this a Maturity Model?

The acuity ramp concept is similar to the idea of a maturity model, but the term maturity carries a sort of moral bias: it implies a "good" direction from "immature" to "mature".

Acuity has a more neutral connotation: it represents the ability to perceive the world in more detail. In our usage, acuity as a dimension is akin to resolution in the imagery/photography sense.

Given the choice between a lower-resolution and a higher-resolution decision point, stakeholders should choose the one that is most appropriate for both their decision and context. It is not inherently better to use a higher-resolution decision point, and it is not inherently worse to use a lower-resolution decision point.

An SSVC acuity ramp describes a series of decision functions of increasing detail and complexity that all address the same decision. The idea is that a decision maker can start with a simple decision model and then, as their needs, resources, or abilities change, gather and analyze more or different data to understand their environment with greater acuity.

Acuity Tradeoffs

In cybersecurity threat and vulnerability analysis, as in most decision-making processes, decision makers must balance trade-offs between the volume, quality, or detail of the information they use and the cost of gathering and analyzing that information. There are many good reasons a decision maker might choose a lower-resolution indicator that is readily available over a higher-resolution indicator that comes at a high cost in time, money, or effort.

One way to think about acuity tradeoffs is to consider the cost or difficulty of gathering and analyzing data. Some vulnerability information is readily available for free as a public resource. Other information is available for purchase, for example as a subscription to a threat intelligence feed. Still other information is only available if you set up a system to collect and manage it yourself, such as an internal asset management system. For direct cost tradeoffs, one might conduct a cost-benefit analysis to determine whether the additional acuity provides more value than it costs. Sometimes, however, tradeoffs are not directly cost-based.

The quality and readiness for use of the information can also vary. Structured, low-resolution public data might be easier to incorporate into a decision model than unstructured data that requires extensive manual analysis. At the CERT/CC, we have observed otherwise high-quality threat intelligence provided as PDF files with threat indicators embedded as screenshots of text, which would be difficult to extract and use in a decision model.

Another tradeoff is that one decision point can sometimes serve as a close-enough proxy for another decision point that is more costly or difficult to assess. For example, in a given deployment context, Value Density might be more readily discerned than Mission Impact for some stakeholders, because it is easier to count how many instances of a thing exist than to estimate the impact of losing specific instances. Alternately, information about Value Density might be available from another source, such as a CVSS v4 scoring provider, whereas Mission Impact might require a more detailed understanding of the stakeholder's mission and environment. An organization might start with Value Density as a proxy for Mission Impact and then, as they develop a better understanding of their environment, replace Value Density with Mission Impact in their decision model.

An Acuity Ramp in Action

The acuity ramp idea is a way to show how a stakeholder could "grow into" their desired decision function as their data collection and analysis capabilities increase. We demonstrate this with the following example.

An Acuity Ramp for a Growing System Deployer Organization

A system deployer organization with few initial resources might start with a simple decision model that includes only a custom IN_KEV decision point: a simple binary indicator of whether a vulnerability has been added to CISA's Known Exploited Vulnerabilities (KEV) catalog. Because this information is free and publicly available, and because it is a simple binary indicator, it is easy to gather and analyze even for a very small organization.
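
For illustration, here is a minimal sketch of such an IN_KEV check in Python. The feed URL and the feed layout (a top-level "vulnerabilities" list whose entries carry a "cveID" field) are assumptions about CISA's public KEV JSON feed at the time of writing; the function names are ours, not part of SSVC.

```python
import json
from urllib.request import urlopen

# Public CISA KEV feed location (an assumption; check CISA's site for
# the current URL).
KEV_FEED_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)


def load_kev_ids(url: str = KEV_FEED_URL) -> set[str]:
    """Fetch the KEV catalog and return the set of listed CVE IDs."""
    with urlopen(url) as response:
        catalog = json.load(response)
    # Assumes the feed layout: {"vulnerabilities": [{"cveID": "CVE-..."}, ...]}
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}


def in_kev(cve_id: str, kev_ids: set[str]) -> str:
    """Evaluate the binary IN_KEV decision point for one vulnerability."""
    return "Yes" if cve_id in kev_ids else "No"
```

Refreshing the cached catalog on a schedule (say, nightly) would be enough to keep such a model current.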

The following table shows how the organization might expand their decision model as they grow in capability.

| Acuity Level | Tree |
|---|---|
| 1 | [IN_KEV] |
| 2 | [EXPLOITATION_1] |
| 3 | [EXPLOITATION_1, SYSTEM_EXPOSURE_1_0_1] |
| 4 | [EXPLOITATION_1, SYSTEM_EXPOSURE_1_0_1, AUTOMATABLE_2] |
| 5 | [EXPLOITATION_1, SYSTEM_EXPOSURE_1_0_1, AUTOMATABLE_2, MISSION_IMPACT_2, SAFETY_IMPACT_1] |
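
One way to make the ramp concrete is to record it as plain data, so that tooling can select the decision points for the organization's current level. The sketch below uses the level numbers and decision point identifiers from the table above; the dictionary structure itself is our illustration, not an SSVC construct.

```python
# The example acuity ramp as data: acuity level -> decision point identifiers.
ACUITY_RAMP: dict[int, list[str]] = {
    1: ["IN_KEV"],
    2: ["EXPLOITATION_1"],
    3: ["EXPLOITATION_1", "SYSTEM_EXPOSURE_1_0_1"],
    4: ["EXPLOITATION_1", "SYSTEM_EXPOSURE_1_0_1", "AUTOMATABLE_2"],
    5: [
        "EXPLOITATION_1",
        "SYSTEM_EXPOSURE_1_0_1",
        "AUTOMATABLE_2",
        "MISSION_IMPACT_2",
        "SAFETY_IMPACT_1",
    ],
}

# Moving up the ramp is then just selecting a different set of decision points.
current_level = 3
decision_points = ACUITY_RAMP[current_level]
```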

Acuity Levels are Stakeholder-Specific

This example demonstrates the concept of acuity levels in SSVC adoption. We are specifically not saying that there are five levels of acuity in SSVC; in principle the concept could apply to any number of levels (including just one). In practice, the number of levels is a stakeholder-specific implementation choice, so the "Acuity Levels" shown here have no inherent meaning outside the context of this example.

The remainder of this example shows one path the organization might take to grow their decision model according to the table above.

Improved Exploit Awareness (Acuity Level 2)

As the organization becomes more capable of gathering and analyzing data, they might start collecting their own exploitation data, allowing them to move to a more detailed decision model that replaces the binary IN_KEV decision point with the three-valued EXPLOITATION_1 decision point. For example, they might incorporate data from the Exploit Database or the Exploit Prediction Scoring System (EPSS) into their decision model.

Exploitation v1.0.0

The present state of exploitation of the vulnerability.

| Value | Definition |
|---|---|
| None | There is no evidence of active exploitation and no public proof of concept (PoC) of how to exploit the vulnerability. |
| PoC | One of the following cases is true: (1) private evidence of exploitation is attested but not shared; (2) widespread hearsay attests to exploitation; (3) typical public PoC in places such as Metasploit or ExploitDB; or (4) the vulnerability has a well-known method of exploitation. |
| Active | Shared, observable, reliable evidence that the exploit is being used in the wild by real attackers; there is credible public reporting. |
```json
{
  "namespace": "ssvc",
  "version": "1.0.0",
  "schemaVersion": "1-0-1",
  "key": "E",
  "name": "Exploitation",
  "description": "The present state of exploitation of the vulnerability.",
  "values": [
    {
      "key": "N",
      "name": "None",
      "description": "There is no evidence of active exploitation and no public proof of concept (PoC) of how to exploit the vulnerability."
    },
    {
      "key": "P",
      "name": "PoC",
      "description": "One of the following cases is true: (1) private evidence of exploitation is attested but not shared; (2) widespread hearsay attests to exploitation; (3) typical public PoC in places such as Metasploit or ExploitDB; or (4) the vulnerability has a well-known method of exploitation."
    },
    {
      "key": "A",
      "name": "Active",
      "description": "Shared, observable, reliable evidence that the exploit is being used in the wild by real attackers; there is credible public reporting."
    }
  ]
}
```
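
As a sketch of how the organization might fold evidence like KEV membership, EPSS probabilities, or public PoC availability into this decision point, the function below maps those signals to the values defined above. The input signals and the EPSS threshold are our illustrative assumptions, not prescribed by SSVC.

```python
def exploitation(in_kev: bool, epss_probability: float,
                 public_poc: bool) -> str:
    """Map observable evidence to an Exploitation value (illustrative)."""
    if in_kev:
        # A KEV listing implies credible reporting of exploitation in the wild.
        return "Active"
    if public_poc or epss_probability >= 0.5:
        # A public PoC, or a high predicted likelihood of exploitation,
        # is treated here as PoC-level evidence. The 0.5 threshold is
        # illustrative; organizations should choose their own.
        return "PoC"
    return "None"
```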

Improved Asset Management (Acuity Level 3)

As they continue to develop their internal asset management capabilities, they might find they have enough asset data to reflect the degree to which a system is exposed to the internet, allowing them to incorporate the SYSTEM_EXPOSURE_1_0_1 decision point into their decision model.

System Exposure v1.0.1

The Accessible Attack Surface of the Affected System or Service

| Value | Definition |
|---|---|
| Small | Local service or program; highly controlled network |
| Controlled | Networked service with some access restrictions or mitigations already in place (whether locally or on the network). A successful mitigation must reliably interrupt the adversary’s attack, which requires the attack is detectable both reliably and quickly enough to respond. Controlled covers the situation in which a vulnerability can be exploited through chaining it with other vulnerabilities. The assumption is that the number of steps in the attack path is relatively low; if the path is long enough that it is implausible for an adversary to reliably execute it, then exposure should be small. |
| Open | Internet or another widely accessible network where access cannot plausibly be restricted or controlled (e.g., DNS servers, web servers, VOIP servers, email servers) |
```json
{
  "namespace": "ssvc",
  "version": "1.0.1",
  "schemaVersion": "1-0-1",
  "key": "EXP",
  "name": "System Exposure",
  "description": "The Accessible Attack Surface of the Affected System or Service",
  "values": [
    {
      "key": "S",
      "name": "Small",
      "description": "Local service or program; highly controlled network"
    },
    {
      "key": "C",
      "name": "Controlled",
      "description": "Networked service with some access restrictions or mitigations already in place (whether locally or on the network). A successful mitigation must reliably interrupt the adversary\u2019s attack, which requires the attack is detectable both reliably and quickly enough to respond. Controlled covers the situation in which a vulnerability can be exploited through chaining it with other vulnerabilities. The assumption is that the number of steps in the attack path is relatively low; if the path is long enough that it is implausible for an adversary to reliably execute it, then exposure should be small."
    },
    {
      "key": "O",
      "name": "Open",
      "description": "Internet or another widely accessible network where access cannot plausibly be restricted or controlled (e.g., DNS servers, web servers, VOIP servers, email servers)"
    }
  ]
}
```
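
A simple sketch of deriving System Exposure from asset management data follows. The input fields (an internet-facing flag and an access-restriction flag) are hypothetical attributes of an asset record; real asset data would require organization-specific interpretation of the definitions above.

```python
def system_exposure(internet_facing: bool, access_restricted: bool) -> str:
    """Map hypothetical asset-record attributes to System Exposure."""
    if internet_facing and not access_restricted:
        return "Open"        # widely accessible; access cannot be restricted
    if internet_facing or not access_restricted:
        return "Controlled"  # networked, but restrictions/mitigations apply
    return "Small"           # local service or highly controlled network
```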

Improved Threat and Vulnerability Analysis (Acuity Level 4)

Over time, the organization's threat and vulnerability analysis capabilities might reach a point where they can begin to collect data on the degree to which a vulnerability is automatable, allowing them to incorporate the AUTOMATABLE_2 decision point into their decision model. This decision point might be informed by data from the National Vulnerability Database (NVD) or by translating CVSS v3 or v4 scores into a value for this decision point.

Automatable v2.0.0

Can an attacker reliably automate creating exploitation events for this vulnerability?

| Value | Definition |
|---|---|
| No | Attackers cannot reliably automate steps 1-4 of the kill chain for this vulnerability. These steps are (1) reconnaissance, (2) weaponization, (3) delivery, and (4) exploitation. |
| Yes | Attackers can reliably automate steps 1-4 of the kill chain. |
```json
{
  "namespace": "ssvc",
  "version": "2.0.0",
  "schemaVersion": "1-0-1",
  "key": "A",
  "name": "Automatable",
  "description": "Can an attacker reliably automate creating exploitation events for this vulnerability?",
  "values": [
    {
      "key": "N",
      "name": "No",
      "description": "Attackers cannot reliably automate steps 1-4 of the kill chain for this vulnerability. These steps are (1) reconnaissance, (2) weaponization, (3) delivery, and (4) exploitation."
    },
    {
      "key": "Y",
      "name": "Yes",
      "description": "Attackers can reliably automate steps 1-4 of the kill chain."
    }
  ]
}
```
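
One illustrative (and deliberately rough) translation from CVSS v3 base metrics to this decision point is sketched below: vulnerabilities that are network-reachable, low-complexity, and require no user interaction are treated as plausibly automatable. This mapping is our assumption for the example, not an official SSVC or CVSS crosswalk.

```python
def automatable(attack_vector: str, attack_complexity: str,
                user_interaction: str) -> str:
    """Translate CVSS v3 base metrics into an Automatable value (illustrative)."""
    if (attack_vector == "NETWORK"
            and attack_complexity == "LOW"
            and user_interaction == "NONE"):
        # Plausibly automatable across kill chain steps 1-4.
        return "Yes"
    return "No"


# Example: a vulnerability scored AV:N/AC:L/UI:N would map to "Yes".
print(automatable("NETWORK", "LOW", "NONE"))
```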

Improved Mission and Safety Impact Understanding (Acuity Level 5)

Now that the deployer organization has been at this for a while, they might have a better understanding of the degree to which a vulnerability impacts both their mission and public safety, allowing them to incorporate the MISSION_IMPACT_2 and SAFETY_IMPACT_1 decision points into their decision model.

Mission Impact v2.0.0

Impact on Mission Essential Functions of the Organization

| Value | Definition |
|---|---|
| Degraded | Little to no impact up to degradation of non-essential functions; chronic degradation would eventually harm essential functions |
| MEF Support Crippled | Activities that directly support essential functions are crippled; essential functions continue for a time |
| MEF Failure | Any one mission essential function fails for a period of time longer than acceptable; overall mission of the organization degraded but can still be accomplished for a time |
| Mission Failure | Multiple or all mission essential functions fail; ability to recover those functions degraded; organization’s ability to deliver its overall mission fails |
```json
{
  "namespace": "ssvc",
  "version": "2.0.0",
  "schemaVersion": "1-0-1",
  "key": "MI",
  "name": "Mission Impact",
  "description": "Impact on Mission Essential Functions of the Organization",
  "values": [
    {
      "key": "D",
      "name": "Degraded",
      "description": "Little to no impact up to degradation of non-essential functions; chronic degradation would eventually harm essential functions"
    },
    {
      "key": "MSC",
      "name": "MEF Support Crippled",
      "description": "Activities that directly support essential functions are crippled; essential functions continue for a time"
    },
    {
      "key": "MEF",
      "name": "MEF Failure",
      "description": "Any one mission essential function fails for period of time longer than acceptable; overall mission of the organization degraded but can still be accomplished for a time"
    },
    {
      "key": "MF",
      "name": "Mission Failure",
      "description": "Multiple or all mission essential functions fail; ability to recover those functions degraded; organization\u2019s ability to deliver its overall mission fails"
    }
  ]
}
```

Safety Impact v1.0.0

The safety impact of the vulnerability.

| Value | Definition |
|---|---|
| None | The effect is below the threshold for all aspects described in Minor. |
| Minor | Any one or more of these conditions hold. Physical harm: Physical discomfort for users (not operators) of the system. Operator resiliency: Requires action by system operator to maintain safe system state as a result of exploitation of the vulnerability where operator actions would be well within expected operator abilities; OR causes a minor occupational safety hazard. System resiliency: Small reduction in built-in system safety margins; OR small reduction in system functional capabilities that support safe operation. Environment: Minor externalities (property damage, environmental damage, etc.) imposed on other parties. Financial: Financial losses, which are not readily absorbable, to multiple persons. Psychological: Emotional or psychological harm, sufficient to be cause for counselling or therapy, to multiple persons. |
| Major | Any one or more of these conditions hold. Physical harm: Physical distress and injuries for users (not operators) of the system. Operator resiliency: Requires action by system operator to maintain safe system state as a result of exploitation of the vulnerability where operator actions would be within their capabilities but the actions require their full attention and effort; OR significant distraction or discomfort to operators; OR causes significant occupational safety hazard. System resiliency: System safety margin effectively eliminated but no actual harm; OR failure of system functional capabilities that support safe operation. Environment: Major externalities (property damage, environmental damage, etc.) imposed on other parties. Financial: Financial losses that likely lead to bankruptcy of multiple persons. Psychological: Widespread emotional or psychological harm, sufficient to be cause for counselling or therapy, to populations of people. |
| Hazardous | Any one or more of these conditions hold. Physical harm: Serious or fatal injuries, where fatalities are plausibly preventable via emergency services or other measures. Operator resiliency: Actions that would keep the system in a safe state are beyond system operator capabilities, resulting in adverse conditions; OR great physical distress to system operators such that they cannot be expected to operate the system properly. System resiliency: Parts of the cyber-physical system break; system’s ability to recover lost functionality remains intact. Environment: Serious externalities (threat to life as well as property, widespread environmental damage, measurable public health risks, etc.) imposed on other parties. Financial: Socio-technical system (elections, financial grid, etc.) of which the affected component is a part is actively destabilized and enters unsafe state. Psychological: N/A. |
| Catastrophic | Any one or more of these conditions hold. Physical harm: Multiple immediate fatalities (Emergency response probably cannot save the victims.) Operator resiliency: Operator incapacitated (includes fatality or otherwise incapacitated). System resiliency: Total loss of whole cyber-physical system, of which the software is a part. Environment: Extreme externalities (immediate public health threat, environmental damage leading to small ecosystem collapse, etc.) imposed on other parties. Financial: Social systems (elections, financial grid, etc.) supported by the software collapse. Psychological: N/A. |
```json
{
  "namespace": "ssvc",
  "version": "1.0.0",
  "schemaVersion": "1-0-1",
  "key": "SI",
  "name": "Safety Impact",
  "description": "The safety impact of the vulnerability.",
  "values": [
    {
      "key": "N",
      "name": "None",
      "description": "The effect is below the threshold for all aspects described in Minor."
    },
    {
      "key": "M",
      "name": "Minor",
      "description": "Any one or more of these conditions hold. Physical harm: Physical discomfort for users (not operators) of the system. Operator resiliency: Requires action by system operator to maintain safe system state as a result of exploitation of the vulnerability where operator actions would be well within expected operator abilities; OR causes a minor occupational safety hazard. System resiliency: Small reduction in built-in system safety margins; OR small reduction in system functional capabilities that support safe operation. Environment: Minor externalities (property damage, environmental damage, etc.) imposed on other parties. Financial Financial losses, which are not readily absorbable, to multiple persons. Psychological: Emotional or psychological harm, sufficient to be cause for counselling or therapy, to multiple persons."
    },
    {
      "key": "J",
      "name": "Major",
      "description": "Any one or more of these conditions hold. Physical harm: Physical distress and injuries for users (not operators) of the system. Operator resiliency: Requires action by system operator to maintain safe system state as a result of exploitation of the vulnerability where operator actions would be within their capabilities but the actions require their full attention and effort; OR significant distraction or discomfort to operators; OR causes significant occupational safety hazard. System resiliency: System safety margin effectively eliminated but no actual harm; OR failure of system functional capabilities that support safe operation. Environment: Major externalities (property damage, environmental damage, etc.) imposed on other parties. Financial: Financial losses that likely lead to bankruptcy of multiple persons. Psychological: Widespread emotional or psychological harm, sufficient to be cause for counselling or therapy, to populations of people."
    },
    {
      "key": "H",
      "name": "Hazardous",
      "description": "Any one or more of these conditions hold. Physical harm: Serious or fatal injuries, where fatalities are plausibly preventable via emergency services or other measures. Operator resiliency: Actions that would keep the system in a safe state are beyond system operator capabilities, resulting in adverse conditions; OR great physical distress to system operators such that they cannot be expected to operate the system properly. System resiliency: Parts of the cyber-physical system break; system\u2019s ability to recover lost functionality remains intact. Environment: Serious externalities (threat to life as well as property, widespread environmental damage, measurable public health risks, etc.) imposed on other parties. Financial: Socio-technical system (elections, financial grid, etc.) of which the affected component is a part is actively destabilized and enters unsafe state. Psychological: N/A."
    },
    {
      "key": "C",
      "name": "Catastrophic",
      "description": "Any one or more of these conditions hold. Physical harm: Multiple immediate fatalities (Emergency response probably cannot save the victims.) Operator resiliency: Operator incapacitated (includes fatality or otherwise incapacitated). System resiliency: Total loss of whole cyber-physical system, of which the software is a part. Environment: Extreme externalities (immediate public health threat, environmental damage leading to small ecosystem collapse, etc.) imposed on other parties. Financial: Social systems (elections, financial grid, etc.) supported by the software collapse. Psychological: N/A."
    }
  ]
}
```
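
Unlike the earlier levels, these decision points rest more on organizational judgment than on external feeds, but even that judgment can be captured as data. A minimal sketch follows, assuming the organization has already tiered its assets by mission criticality; the tier labels and the mapping are hypothetical and would require mission analysis to define for real.

```python
# Hedged sketch: map hypothetical asset-criticality tiers to Mission Impact.
MISSION_IMPACT_BY_TIER = {
    "non_essential": "Degraded",
    "mef_support": "MEF Support Crippled",
    "mef": "MEF Failure",
    "mission_critical": "Mission Failure",
}


def mission_impact(asset_tier: str) -> str:
    """Look up the worst-case Mission Impact for an asset's criticality tier."""
    return MISSION_IMPACT_BY_TIER[asset_tier]
```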

In this way, the organization can grow into a more detailed decision model as their understanding and capabilities improve.

Conclusion

The acuity ramp concept shows how a stakeholder can "grow into" their desired decision function as their data collection and analysis capabilities improve. It illustrates how a decision model can be adapted to the decision maker's context, and how the decision maker can trade off the cost of gathering information against the quality of the decisions it supports.

The example above is just one illustration of the acuity ramp concept. There are many other paths an organization might take from a simple starting point toward a more detailed decision model for any particular decision. Substituting one decision point for another, adding decision points over time, or even customizing decision points to better fit the organization's specific context are all ways an organization might grow from a simple decision model to a more robust one.