
Related Vulnerability Management Systems

There are several other bodies of work that are used in practice to assist vulnerability managers in making decisions. Three relevant systems are CVSS, EPSS, and Tenable's Vulnerability Priority Rating (VPR). There are other systems derived from CVSS, such as RVSS for robots [1] and MITRE's Rubric for Applying CVSS to Medical Devices. There are also other nascent efforts to automate aspects of the decision making process, such as vPrioritizer. This section discusses the relationship between these various systems and SSVC.

CVSS

What about CVSS v4?

Since this documentation was written, CVSS v4 has been released. While we plan to address CVSS v4 in a future update to the SSVC documentation, we are retaining our CVSS v3.1 content because it remains the most widely used version of CVSS.

CVSS version 3.1 has three metric groups: base, environmental, and temporal. The metrics in the base group are all required, and they are the only required metrics. Partly because of this design, CVSS base scores and base metrics are far and away the most commonly used and communicated. A CVSS base score has two parts: the exploitability metrics and the impact metrics. Each of these is echoed or reflected in aspects of SSVC, though the breadth of topics considered by SSVC is wider than that of CVSS version 3.1.

How CVSS is used matters. Using just the base scores, which are “the intrinsic characteristics of a vulnerability that are constant over time and across user environments,” as a stand-alone prioritization method is not recommended [2]. Nonetheless, both the U.S. government [3][4][5] and the global payment card industry [6] have defined such misuse as expected practice in their vulnerability management requirements. Where patching is not mandated, CVSS scores have a complex relationship with patch deployment, at least in an ICS context [7].

CVSS has struggled to adapt to other stakeholder contexts. Various stakeholder groups have expressed their dissatisfaction by creating new versions of CVSS, such as for medical devices [8], robotics [1], and industrial systems [9]. In these three examples, the modifications tend to add complexity to CVSS by adding metrics. Product vendors, including but not limited to Red Hat, Microsoft, and Cisco, have adapted CVSS for development prioritization to varying degrees. The vendors codify CVSS’s recommended qualitative severity rankings in different ways, and Red Hat and Microsoft give the user interaction base metric more weight.

Exploitability metrics (Base metric group)

The four metrics in this group are Attack Vector, Attack Complexity, Privileges Required, and User Interaction. These considerations are likely to be involved in the Automatability decision point. If Attack Vector = Network and Privileges Required = None, then the delivery phase of the kill chain is likely to be automatable. Attack Vector may also be correlated with the Exposure decision point. Attack Complexity may influence how long it takes an adversary to craft an automated exploit, but Automatability only asks whether exploitation can be automated, not how difficult it is. However, Attack Complexity may influence the weaponization phase of the kill chain. User Interaction does not cleanly map to a decision point. In general, SSVC does not care whether a human is involved in exploitation of the vulnerability or not. Some human interaction is, for all intents and purposes, automatable by attackers: most people click on links in emails as part of their normal processes. In most such situations, user interaction does not present a firm barrier to automatability; it presents a stochastic barrier. Automatability is written to consider only firm barriers to automation.
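
To make the relationship concrete, the following Python sketch shows one possible heuristic for deriving a starting point for Automatability from the CVSS v3.1 exploitability metrics. The function name, the "review" placeholder value, and the heuristic itself are illustrative assumptions, not part of SSVC or CVSS.

```python
# A minimal sketch, not official SSVC guidance: suggest a starting point for
# the Automatability decision point from CVSS v3.1 exploitability metrics.

def suggest_automatability(attack_vector: str, privileges_required: str,
                           user_interaction: str) -> str:
    """Return 'yes', 'no', or 'review' as a first cut at Automatability.

    user_interaction is accepted but deliberately ignored: SSVC treats it
    as a stochastic barrier, not a firm barrier to automation.
    """
    if attack_vector == "N" and privileges_required == "N":
        # Network-reachable with no privileges: delivery is likely automatable.
        return "yes"
    if attack_vector == "P":
        # Physical access is a firm barrier to automating delivery at scale.
        return "no"
    # Local/adjacent vectors, required privileges, chaining, etc. need analyst review.
    return "review"

# Example: AV:N/PR:N/UI:R still comes back 'yes'; the link-click barrier is
# stochastic, not firm.
print(suggest_automatability("N", "N", "R"))
```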

Automatability includes considerations that are not included in the exploitability metrics. Most notably, the concept of vulnerability chaining is addressed in Automatability but not anywhere in CVSS. Automatability is also outcomes focused: a vulnerability is evaluated based on an observable outcome, namely whether the first four steps of the kill chain can be automated for it. A proof of automation in a relevant environment is an objective evaluation of the score in a way that cannot be provided for some CVSS elements, such as Attack Complexity.

Impact metrics (Base metric group)

The metrics in this group are Confidentiality, Integrity, and Availability. There is also a loosely associated Scope metric. The CIA impact metrics are directly handled by Technical Impact.
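
As a small illustration, here is one plausible crosswalk in Python from the CVSS v3.1 C/I/A metric values to SSVC's Technical Impact. The mapping rule is an assumption for illustration, not a defined SSVC crosswalk.

```python
# A minimal sketch of one plausible crosswalk (an assumption, not a defined
# SSVC mapping): derive Technical Impact from CVSS v3.1 C/I/A metric values.

def suggest_technical_impact(confidentiality: str, integrity: str,
                             availability: str) -> str:
    """Return 'total' or 'partial' for the Technical Impact decision point.

    SSVC's 'total' means the exploit gives total control of the vulnerable
    component; a complete loss of confidentiality, integrity, and
    availability (all 'H') is the closest CVSS analogue.
    """
    if (confidentiality, integrity, availability) == ("H", "H", "H"):
        return "total"
    return "partial"

print(suggest_technical_impact("H", "H", "N"))  # -> 'partial'
```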

Scope is a difficult CVSS metric to categorize. The specification describes it as “whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope” [2]. This is a fuzzy concept. SSVC better describes this concept by breaking it down into component parts: the impact of exploitation of the vulnerable component on other components is covered under Mission Impact, public and situated Well-being Impact, and the stakeholder-specific nature of SSVC, in which the analysis is tailored to stakeholder concerns. CVSS addresses some aspects of the scope of CVSS as a whole under the Scope metric definition; in SSVC, these aspects are handled in the Scope section.

Temporal metric group

The temporal metric group primarily contains the Exploit Code Maturity metric. This metric expresses a concept similar to Exploitation. The main differences are that Exploitation is not optional in SSVC and that SSVC accounts for the observation that most vulnerabilities with CVE-IDs do not have public exploit code [10] and are not actively exploited [11][12].

Environmental metric group

The environmental metric group allows a consumer of a CVSS base score to change it based on their environment. CVSS needs this functionality because the organizations that produce CVSS scores tend to be what SSVC calls suppliers and consumers of CVSS scores are what SSVC calls deployers. These two stakeholder groups have a variety of natural differences, which is why SSVC treats them separately. SSVC does not have such customization as a bolt-on optional metric group because SSVC is stakeholder-specific by design.

EPSS

The Exploit Prediction Scoring System (EPSS) is “a data-driven effort for estimating the likelihood (probability) that a software vulnerability will be exploited in the wild.” EPSS is currently based on a machine-learning classifier and proprietary data from Fortiguard, Alienvault OTX, the Shadowserver Foundation, and GreyNoise. While the group has made an effort to make the ML classifier transparent, ML classifiers are not able to provide an intelligible, human-accessible explanation for their behavior [13]. The use of proprietary training data makes the system less transparent.

EPSS could be used to inform the Exploitation decision point. Currently, Exploitation focuses on the observable state of the world at the time of the SSVC decision. EPSS, by contrast, predicts whether a transition will occur from the SSVC value of none to active. A sufficiently high EPSS score could therefore be used as an additional criterion for scoring a vulnerability as active even when there is no observed active exploitation, as sketched below.
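
A minimal Python sketch of that idea follows. The 0.5 threshold and the function shape are illustrative assumptions; neither SSVC nor EPSS prescribes a cutoff.

```python
# A minimal sketch, assuming a locally chosen EPSS threshold (0.5 here is
# illustrative, not a recommendation from SSVC or EPSS): combine observed
# evidence with an EPSS probability to score the Exploitation decision point.

def score_exploitation(observed_active: bool, public_poc: bool,
                       epss_probability: float,
                       epss_threshold: float = 0.5) -> str:
    """Return 'active', 'poc', or 'none' for the Exploitation decision point."""
    if observed_active:
        return "active"
    if epss_probability >= epss_threshold:
        # Treat a sufficiently high predicted likelihood of exploitation as
        # 'active', per the additional criterion described above.
        return "active"
    if public_poc:
        return "poc"
    return "none"

# No observed exploitation, but a high predicted probability upgrades the value.
print(score_exploitation(False, True, 0.87))  # -> 'active'
```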

VPR

VPR is a prioritization product sold by Tenable. VPR determines the severity level of a vulnerability based on “technical impact and threat.” Like Technical Impact in SSVC, the technical impact component of VPR tracks the CVSS version 3 impact metrics in the base metric group. The VPR threat component covers recent and future threat activity; it is comparable to what Exploitation would be if EPSS were added to it.

VPR is therefore essentially a subset of SSVC. Stylistically and methodologically, however, VPR is quite different from SSVC. VPR is based on machine learning models and proprietary data, so the results are effectively opaque. There is no ability to coherently and transparently customize the VPR system. Such customization is a central feature of SSVC, as described in Tree Construction and Customization Guidance.

CVSS spin offs

Attempts to tailor CVSS to specific stakeholder groups, such as robotics or medical devices, are perhaps the biggest single reason we created SSVC. CVSS is one-size-fits-all by design. These customization efforts struggle because CVSS was not designed to be adaptable to different stakeholder considerations. The SSVC section Tree Construction and Customization Guidance explains how stakeholders or stakeholder communities can adapt SSVC in a reliable way that still promotes repeatability and communication.

vPrioritizer

vPrioritizer is an open-source project that attempts to integrate asset management and vulnerability prioritization. The software focuses mostly on the asset management aspects. It currently includes CVSS base scores as the de facto vulnerability prioritization method; however, the system is fundamentally agnostic to the prioritization method. vPrioritizer is an example of a product that is closely associated with vulnerability prioritization but is not directly about the prioritization method. In that sense, it is compatible with any of the methods mentioned above, or with SSVC. However, SSVC would be better suited to make use of vPrioritizer's broad-spectrum asset management data. For example, vPrioritizer aims to collect data points on topics such as asset significance. Asset significance could be expressed through the SSVC decision points of Mission Impact and situated Well-being Impact, but it has no ready expression in CVSS, EPSS, or VPR; a sketch of such a mapping follows.
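
For illustration only, the sketch below maps hypothetical asset significance tiers onto Mission Impact and situated Well-being Impact. The tier names and the chosen decision point values are assumptions, not vPrioritizer fields or an official SSVC mapping.

```python
# A minimal sketch; the asset tiers and the chosen values are hypothetical
# assumptions, not vPrioritizer fields or an official SSVC mapping.

ASSET_TIER_TO_SSVC = {
    # tier: (Mission Impact, situated Well-being Impact)
    "business_critical": ("mission failure", "irreversible"),
    "important": ("MEF support crippled", "material"),
    "commodity": ("degraded", "minimal"),
}

def ssvc_inputs_for(tier: str) -> tuple[str, str]:
    """Look up suggested Mission Impact and Well-being Impact for an asset tier."""
    return ASSET_TIER_TO_SSVC[tier]

print(ssvc_inputs_for("important"))  # -> ('MEF support crippled', 'material')
```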


  1. Víctor Mayoral Vilches, Endika Gil-Uriarte, Irati Zamalloa Ugarte, Gorka Olalde Mendia, Rodrigo Izquierdo Pisón, Laura Alzola Kirschgens, Asier Bilbao Calvo, Alejandro Hernández Cordero, Lucas Apa, and César Cerrudo. Towards an open standard for assessing the severity of robot security vulnerabilities, the robot vulnerability scoring system (RVSS). arXiv preprint arXiv:1807.10357, 2018. 

  2. CVSS SIG. Common vulnerability scoring system. Technical Report version 3.1 r1, Forum of Incident Response and Security Teams, Cary, NC, USA, 2019. URL: https://www.first.org/cvss/v3.1/specification-document

  3. Karen Scarfone, Murugiah Souppaya, Amanda Cody, and Angela Orebaugh. Technical guide to information security testing and assessment. Technical Report SP 800-115, US Dept of Commerce, National Institute of Standards and Technology, Gaithersburg, MD, 2008. 

  4. Murugiah Souppaya and Karen Scarfone. Guide to enterprise patch management technologies. Technical Report SP 800-40r3, US Dept of Commerce, National Institute of Standards and Technology, Gaithersburg, MD, 2013. 

  5. Cybersecurity and Infrastructure Security Agency. Critical vulnerability mitigation. 2015. Superseded by BOD19-02. URL: https://cyber.dhs.gov/bod/15-01/ (visited on 2020-08-21). 

  6. PCI Security Standards Council. Payment card industry (PCI) data security standard: approved scanning vendors. Technical Report ver 3.0, PCI Security Standards Council, Wakefield, MA, USA, 2017. URL: https://www.pcisecuritystandards.org/documents/ASV_Program_Guide_v3.0.pdf

  7. Brandon Wang, Xiaoye Li, Leandro P de Aguiar, Daniel S Menasche, and Zubair Shafiq. Characterizing and modeling patching practices of industrial control systems. Measurement and Analysis of Computing Systems, 1(1):1–23, 2017. 

  8. Melissa P Chase and Steven M Cristey Coley. Rubric for applying CVSS to medical devices. Technical Report 18-2208, MITRE Corporation, McLean, VA, USA, 2019. URL: https://www.mitre.org/publications/technical-papers/rubric-for-applying-cvss-to-medical-devices

  9. Santiago Figueroa-Lorenzo, Javier Añorga, and Saioa Arrizabalaga. A survey of IIoT protocols: a measure of vulnerability risk analysis based on CVSS. ACM Comput. Surv., 2020. URL: https://doi.org/10.1145/3381038

  10. Allen D Householder, Jeff Chrabaszcz, Trent Novelly, David Warren, and Jonathan M Spring. Historical analysis of exploit availability timelines. In Workshop on Cyber Security Experimentation and Test. Virtual conference, 2020. USENIX. 

  11. Dan Guido. The exploit intelligence project. Technical Report, iSEC Partners, 2011. URL: http://www.trailofbits.com/resources/exploit_intelligence_project_2_slides.pdf

  12. Jay Jacobs, Sasha Romanosky, Benjamin Edwards, Idris Adjerid, and Michael Roytman. Exploit prediction scoring system (EPSS). Digital Threats, Jul 2021. URL: https://doi.org/10.1145/3436242

  13. Jonathan M. Spring, Joshua Fallon, April Galyardt, Angela Horneman, Leigh Metcalf, and Ed Stoner. Machine learning in cybersecurity: a guide. Technical Report CMU/SEI-2019-TR-005, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2019. URL: http://resources.sei.cmu.edu/library/asset-view.cfm?AssetID=633583