SSVC using Current Information Sources
Some SSVC decision points can be informed or answered by currently available information feeds or sources. This section provides an overview of some options; we cannot claim it is exhaustive. Each decision point has a subsection on gathering information about it; those subsections provide suggestions that would also contribute to creating or honing information feeds.
However, if there is a category of information source we have not captured, please create an issue on the SSVC GitHub page explaining it and what decision point it informs.
Exploitation
Various vendors provide paid feeds of vulnerabilities that are currently exploited by attacker groups. Any of these could be used to indicate that active is the appropriate Exploitation value for a vulnerability. Although the lists are all different, we expect they are all valid information sources; the difficulty is matching a list's scope and vantage with a compatible scope and vantage of the consumer. We are not aware of a comparative study of the different lists of active exploits; however, we expect they have similar properties to block lists of network touchpoints [1] and malware [2]. Namely, each list has a different view and vantage on the problem, which makes them appear to be different, but each list accurately represents its particular vantage at a point in time.
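As one concrete illustration, the sketch below sets Exploitation to active when a CVE-ID appears in a publicly available exploited-vulnerability feed, here CISA's Known Exploited Vulnerabilities (KEV) catalog. The URL and JSON field names are assumptions about the catalog's current publication format and should be verified; a private vendor feed would slot in the same way.

```python
# Sketch: set Exploitation to "active" when a CVE appears in an exploited-vulnerability feed.
# Uses CISA's Known Exploited Vulnerabilities (KEV) JSON catalog as an example feed; the URL
# and field names ("vulnerabilities", "cveID") are assumptions to verify before relying on them.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def exploitation_value(cve_id: str) -> str:
    """Return an SSVC Exploitation value based solely on the KEV feed."""
    with urllib.request.urlopen(KEV_URL) as response:
        catalog = json.load(response)
    exploited = {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}
    # Absence from this one feed is not evidence of "none"; other feeds or a
    # proof-of-concept search (see the CWE discussion below) would refine this.
    return "active" if cve_id.upper() in exploited else "none or PoC (undetermined)"

print(exploitation_value("CVE-2021-44228"))
```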
System Exposure
System Exposure could be informed by the various scanning platforms such as Shodan and Shadowserver. A service on a device should be scored as open if such a general-purpose Internet scan finds that the service responds. Such scans do not find all open systems, but any system they find should be considered open. Scanning software such as Nessus could be used to scan for connectivity inside an organization; devices that such a scan finds on an internal network whose hosts regularly connect to the Internet should be scored as controlled.
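A minimal sketch of the first check, assuming the `shodan` Python package and an API key: if a general-purpose Internet index shows the host answering on the service's port, score the service open. A miss is inconclusive, not evidence of small or controlled.

```python
# Sketch: score System Exposure as "open" when a general-purpose Internet scan
# (here, Shodan's index) shows the host answering on the service's port.
# Assumes the `shodan` package and a valid API key (placeholder below).
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder

def system_exposure(ip: str, port: int) -> str:
    api = shodan.Shodan(API_KEY)
    try:
        host = api.host(ip)
    except shodan.APIError:
        return "not found in scan data (inconclusive)"
    # Only a positive finding is meaningful: the scan proves "open", never "small"/"controlled".
    return "open" if port in host.get("ports", []) else "not observed on that port (inconclusive)"

print(system_exposure("198.51.100.7", 443))
```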
Adapting other Information Sources
Some information sources that were not designed with SSVC in mind can be adapted to work with it. Three prominent examples are CVSS impact base metrics, CWE, and CPE.
CVSS and Technical Impact
Technical Impact is directly related to the CVSS impact metric group. The interpretation is different for CVSS version 3 than version 4.
Mapping CVSS v4 to Technical Impact
For CVSS v4, the impact metric group can be directly mapped to Technical Impact. Stakeholders can define their own mapping, but the recommended mapping between CVSS v4 metric values and Technical Impact is
| Confidentiality (VC) | Integrity (VI) | Availability (VA) | Technical Impact |
|---|---|---|---|
| High (H) | High (H) | any | Total |
| High (H) | Low (L) or None (N) | any | Partial |
| Low (L) or None (N) | High (H) | any | Partial |
That is, if the vulnerability leads to a high impact on the confidentiality and integrity of the vulnerable system, then that is equivalent to total technical impact on the system.
The following considerations are accounted for in this recommendation.
- A denial of service condition is modeled as a partial Technical Impact. Therefore, a high availability impact to the vulnerable system should not be mapped to total Technical Impact on its own.
- There may be situations in which a high confidentiality impact is sufficient for total technical impact; for example, disclosure of the root or administrative password for the system leads to total technical control of the system. So this suggested mapping is a useful heuristic, but there may be exceptions, depending on how CVSS v4 metric value assignment norms for these situations develop.
- While the Subsequent System impact metric group in CVSS v4 is useful, those concepts are not captured by Technical Impact. Subsequent System impacts are captured, albeit in different framings, by decision points such as Situated Safety Impact, Mission Impact, and Value Density. There is not a direct mapping between the Subsequent System impact metric group and these decision points, except in the case of Public Safety Impact and the CVSS v4 environmental metrics for safety impact on subsequent systems. In that case, both definitions map back to the same safety standard (IEC 61508) and so are easily mapped to each other.
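A minimal sketch of the recommended CVSS v4 mapping, with the caveats above still left to analyst judgment. The function name and the lowercase value strings are our own conventions for illustration.

```python
# Sketch of the recommended CVSS v4 -> Technical Impact mapping from the table above.
# Total only when both vulnerable-system confidentiality (VC) and integrity (VI) are High;
# availability (VA) is intentionally ignored, since denial of service maps to partial.
def technical_impact_from_cvss4(vc: str, vi: str) -> str:
    """vc, vi: CVSS v4 vulnerable-system Confidentiality/Integrity values ('H', 'L', or 'N')."""
    return "total" if (vc == "H" and vi == "H") else "partial"

assert technical_impact_from_cvss4("H", "H") == "total"
assert technical_impact_from_cvss4("H", "L") == "partial"
assert technical_impact_from_cvss4("N", "H") == "partial"
```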
CVSS v3 and Technical Impact
For CVSS v3, the impact metric group cannot be directly mapped to Technical Impact because of the Scope metric. Technical Impact is only about adversary control of the vulnerable component. If the CVSS v3 value of Scope is Unchanged, then the recommendation is the same as for CVSS v4 above, since the impact metric group is information exclusively about the vulnerable system. If the CVSS v3 value of Scope is Changed, then the impact metrics may describe either the vulnerable system or subsequent systems, based on whichever makes the final score higher. Because the vector string does not document whether the impacts are to the vulnerable or a subsequent system, and Technical Impact is based only on vulnerable system impacts, the Technical Impact value cannot be cleanly mapped in this case.
Mapping CVSS v3 to Technical Impact
Summarizing the discussion above, the mapping between CVSS v3 and Technical Impact is
| CVSS Scope | Confidentiality (C) | Integrity (I) | Availability (A) | Technical Impact |
|---|---|---|---|---|
| Unchanged | High (H) | High (H) | any | Total |
| Unchanged | High (H) | Low (L) or None (N) | any | Partial |
| Unchanged | Low (L) or None (N) | High (H) | any | Partial |
| Changed | any | any | any | (ambiguous) |
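A sketch of the CVSS v3 mapping, assuming a standard CVSS:3.x vector string as input. It returns no value when Scope is Changed, mirroring the ambiguity noted above; the simple string parsing is illustrative only.

```python
# Sketch of the CVSS v3 -> Technical Impact mapping from the table above,
# reading S, C, and I directly from a CVSS v3.x vector string.
# Returns None when Scope is Changed, because the vector string does not say
# whether the impact metrics describe the vulnerable or a subsequent system.
def technical_impact_from_cvss3(vector: str):
    metrics = dict(part.split(":") for part in vector.split("/") if ":" in part)
    if metrics.get("S") == "C":
        return None  # ambiguous; needs analyst review of the advisory text
    return "total" if metrics.get("C") == "H" and metrics.get("I") == "H" else "partial"

assert technical_impact_from_cvss3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H") == "total"
assert technical_impact_from_cvss3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H") is None
```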
CWE and Exploitation
As mentioned in the discussion of Exploitation, CWE could be used to inform one of the conditions that satisfy proof of concept. For some classes of vulnerabilities, the proof of concept is well known because the method of exploitation is already part of open-source tools; on-path attacker scenarios for intercepting TLS certificates are one example. These scenarios are a cluster of related vulnerabilities. Since CWE classifies clusters of related vulnerabilities, the community could likely curate a list of CWE-IDs for which this condition of a well-known exploit technique is satisfied. Once that list were curated, a CVE-ID could automatically be assigned proof of concept if the CWE-ID of which it is an instance is on the list. Such a check could not be exhaustive, since there are other conditions that satisfy proof of concept. If paired with automatic searches for exploit code in public repositories, these checks would cover many scenarios. If paired with the active exploitation feeds discussed above, then the value of Exploitation could be determined almost entirely from available information without direct analyst involvement at each organization.
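A sketch of that check follows. The CWE-IDs shown are placeholders for illustration, not a vetted community list; only the shape of the lookup is the point.

```python
# Sketch: treat Exploitation as at least "PoC" when a CVE's CWE classification is on a
# community-curated list of weakness classes with well-known exploitation techniques.
# The CWE-IDs below are hypothetical placeholders, NOT a vetted list.
WELL_KNOWN_TECHNIQUE_CWES = {"CWE-295", "CWE-89"}  # hypothetical entries

def at_least_poc(cve_cwe_ids: set[str]) -> bool:
    """True if any of the CVE's CWE classifications is on the curated list."""
    return bool(cve_cwe_ids & WELL_KNOWN_TECHNIQUE_CWES)

# A False result is not evidence against proof of concept: public exploit code or
# other conditions can still satisfy it, so this check only ever raises the value.
print(at_least_poc({"CWE-295"}))  # True for this hypothetical list
```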
CPE and Safety Impact
CPE could possibly be curated into a list of representative Public Safety Impact values for each platform or product. The Situated Safety Impact would be too specific for a classification as broad as CPE, but it might work for Public Safety Impact, which is concerned with a more general assessment of the usual use of a component. Creating a mapping between CPE and Public Safety Impact could be a community effort to associate a value with each CPE entry, or an organization might label a fragment of the CPE data with Public Safety Impact based on the platforms that the supplier needs information about most often.
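One possible shape for such a mapping is sketched below. The CPE names and the assigned values are illustrative assumptions, not an existing curated dataset.

```python
# Sketch of what a CPE -> Public Safety Impact lookup could look like.
# Both the CPE names and the assigned values are hypothetical examples.
CPE_PUBLIC_SAFETY = {
    "cpe:2.3:o:vendorx:infusion_pump_fw:*:*:*:*:*:*:*:*": "significant",  # hypothetical
    "cpe:2.3:a:vendory:blog_engine:*:*:*:*:*:*:*:*": "minimal",           # hypothetical
}

def public_safety_impact(cpe: str) -> str:
    # Unmapped products fall back to analyst judgment rather than a default value.
    return CPE_PUBLIC_SAFETY.get(cpe, "unmapped: needs analyst judgment")

print(public_safety_impact("cpe:2.3:o:vendorx:infusion_pump_fw:*:*:*:*:*:*:*:*"))
```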
Potential Future Information Feeds
So far, we have identified information sources that can support scalable decision making for most decision points. Some sources, such as CWE or existing asset management solutions, would require some connective glue to support SSVC, but not much.
Automatable and Value Density
The SSVC decision point that we have not identified an information source for is Utility. Utility is composed of Automatable and Value Density, so the question is what sort of feed could support each of those decision points.
A feed is plausible for both of these decision points. The values for Automatable and Value Density are both about the relationship between a vulnerability, the attacker community, and the aggregate state of systems connected to the Internet. While that is a broad analysis frame, it means that any community that shares a similar set of adversaries and a similar region of the Internet can share the same response to both decision points. An organization in the People's Republic of China may have a different view than an organization in the United States, but most organizations within each region should have close enough to the same view to share values for Automatable and Value Density. These factors suggest that a market for an information feed about these decision points is viable.
CVSS v4, Automatable, and Value Density
It is not coincidental that the CVSS v4 supplemental metrics include Automatable (AU) and Value Density (V). The SSVC team collaborated with the FIRST CVSS Special Interest Group in the development of these metrics.
At this point, it is not clear that an algorithm or search process could be designed to automate scoring Automatable and Value Density. It would be a complex natural language processing task. Perhaps a machine learning system could be designed to suggest values. But more likely, if there is a market for this information, a few analysts could be paid to score vulnerabilities on these values for the community. Supporting such analysts with further automation could proceed by small incremental improvements. For example, perhaps information about whether the Reconnaissance step in the kill chain is Automatable or not could be automatically gathered from Internet scanning firms such as Shodan or Shadowserver. This wouldn't make a determination for an analyst, but would be a step towards automatic assessment of the decision point.
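As a sketch of the kind of incremental automation described above, an Internet-scanning service's search index could estimate how many instances of a product are reachable, which bears on whether reconnaissance could be automated and weakly signals Value Density. This assumes the `shodan` package and uses an illustrative query string.

```python
# Sketch: estimate how many instances of a product are reachable on the Internet,
# as one input to whether the Reconnaissance step is automatable (and as a weak
# Value Density signal). Assumes the `shodan` package; the query is illustrative.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder

def reachable_instance_count(product_query: str) -> int:
    api = shodan.Shodan(API_KEY)
    return api.count(product_query)["total"]

# This does not decide Automatable on its own; an analyst still judges the kill chain
# steps, but a large reachable population makes automated reconnaissance plausible.
print(reachable_instance_count('product:"OpenSSH"'))
```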
[1] Leigh B. Metcalf and Jonathan M. Spring. Blacklist ecosystem analysis: spanning Jan 2012 to Jun 2014. In Workshop on Information Sharing and Collaborative Security, 13–22. Denver, 2015. ACM.

[2] Marc Kührer, Christian Rossow, and Thorsten Holz. Paint it black: evaluating the effectiveness of malware blacklists. In Recent Advances in Intrusion Detection, number 8688 in LNCS, 1–21. Gothenburg, Sweden, 2014. Springer.