Combining EPSS with Other Exploitation-Related Decision Points
SSVC users might want to combine exploitation-related information from multiple sources into a single decision point for use downstream in a decision table such as the SSVC Deployer Decision Model.
What's in this How-To?
This How-To explores how to combine information from multiple sources via SSVC Decision Points and Decision Tables to create a more nuanced view of exploitation risk.
One such source is the Exploit Prediction Scoring System (EPSS) probability score.
What is the EPSS Probability Score?
The EPSS probability score is a number between 0 and 1 that indicates the likelihood of a vulnerability being exploited in the wild within the next 30 days.
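EPSS scores are published via the FIRST.org EPSS API. As a minimal sketch, the helper below extracts the probability from an API response; the response shape (a `data` array whose `epss` field arrives as a string) reflects the publicly documented API, but verify it against the current EPSS API documentation before relying on it.

```python
def epss_probability(response: dict) -> float:
    """Extract the EPSS probability from an EPSS API response.

    Assumes the FIRST.org EPSS API response shape, in which scores
    are delivered as strings inside a "data" array.
    """
    return float(response["data"][0]["epss"])

# Illustrative response fragment (values are made up, not real scores):
sample = {"data": [{"cve": "CVE-2021-44228", "epss": "0.975", "percentile": "0.999"}]}
print(epss_probability(sample))  # 0.975
```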
Other Exploitation-Related Information Sources
EPSS is not the only source of exploitation-related information, however. The CISA Known Exploited Vulnerabilities (KEV) catalog is another important source, and the CVSS Exploit Maturity vector element captures additional exploitation-related information.
We have implemented SSVC Decision Points to reflect both CISA KEV and CVSS Exploit Maturity:
In KEV (cisa:KEV:1.0.0)
Denotes whether a vulnerability is in the CISA Known Exploited Vulnerabilities (KEV) list.
Value | Definition |
---|---|
No (N) | Vulnerability is not listed in KEV. |
Yes (Y) | Vulnerability is listed in KEV. |
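Evaluating this decision point amounts to a membership check against the KEV catalog. The sketch below assumes the shape of CISA's published KEV JSON feed (a `vulnerabilities` array with `cveID` fields); treat that shape as an assumption and confirm it against the current feed schema.

```python
def in_kev(cve_id: str, kev_feed: dict) -> str:
    """Map KEV catalog membership to the cisa:KEV:1.0.0 value keys (Y/N)."""
    listed = {entry["cveID"] for entry in kev_feed.get("vulnerabilities", [])}
    return "Y" if cve_id in listed else "N"

# Illustrative feed fragment:
feed = {"vulnerabilities": [{"cveID": "CVE-2021-44228"}, {"cveID": "CVE-2023-4863"}]}
print(in_kev("CVE-2021-44228", feed))  # Y
print(in_kev("CVE-2020-0000", feed))   # N
```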
In KEV (cisa:KEV:1.0.0) JSON Example
{
"namespace": "cisa",
"key": "KEV",
"version": "1.0.0",
"name": "In KEV",
"definition": "Denotes whether a vulnerability is in the CISA Known Exploited Vulnerabilities (KEV) list.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "N",
"name": "No",
"definition": "Vulnerability is not listed in KEV."
},
{
"key": "Y",
"name": "Yes",
"definition": "Vulnerability is listed in KEV."
}
]
}
Exploit Maturity (cvss:E:2.0.0)
This metric measures the likelihood of the vulnerability being attacked, and is based on the current state of exploit techniques, exploit code availability, or active, “in-the-wild” exploitation.
Value | Definition |
---|---|
Unreported (U) | Based on available threat intelligence, each of the following must apply: no knowledge of publicly available proof-of-concept exploit code; no knowledge of reported attempts to exploit this vulnerability; no knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., neither the “POC” nor “Attacked” values apply) |
Proof-of-Concept (P) | Based on available threat intelligence, each of the following must apply: proof-of-concept exploit code is publicly available; no knowledge of reported attempts to exploit this vulnerability; no knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., the “Attacked” value does not apply) |
Attacked (A) | Based on available threat intelligence, either of the following must apply: attacks targeting this vulnerability (attempted or successful) have been reported; solutions to simplify attempts to exploit the vulnerability are publicly or privately available (such as exploit toolkits) |
Not Defined (X) | This metric value is not defined. See CVSS documentation for details. |
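When the Exploit Maturity value arrives embedded in a CVSS vector string (e.g., `.../E:A`), it can be pulled out with a small parser. This is a sketch, not an official CVSS library; it assumes the standard slash-delimited vector format and follows the CVSS convention that an absent E metric means Not Defined (X).

```python
def exploit_maturity(vector: str) -> str:
    """Return the Exploit Maturity (E) value key from a CVSS vector string.

    Returns "X" (Not Defined) when the E metric is absent from the vector.
    """
    for part in vector.split("/"):
        if part.startswith("E:"):
            return part.split(":", 1)[1]
    return "X"

print(exploit_maturity("CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:A"))  # A
print(exploit_maturity("CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"))      # X
```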
Exploit Maturity (cvss:E:2.0.0) JSON Example
{
"namespace": "cvss",
"key": "E",
"version": "2.0.0",
"name": "Exploit Maturity",
"definition": "This metric measures the likelihood of the vulnerability being attacked, and is based on the current state of exploit techniques, exploit code availability, or active, “in-the-wild” exploitation.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "U",
"name": "Unreported",
"definition": "Based on available threat intelligence each of the following must apply: No knowledge of publicly available proof-of-concept exploit code No knowledge of reported attempts to exploit this vulnerability No knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., neither the “POC” nor “Attacked” values apply)"
},
{
"key": "P",
"name": "Proof-of-Concept",
"definition": "Based on available threat intelligence each of the following must apply: Proof-of-concept exploit code is publicly available No knowledge of reported attempts to exploit this vulnerability No knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., the “Attacked” value does not apply)"
},
{
"key": "A",
"name": "Attacked",
"definition": "Based on available threat intelligence either of the following must apply: Attacks targeting this vulnerability (attempted or successful) have been reported Solutions to simplify attempts to exploit the vulnerability are publicly or privately available (such as exploit toolkits)"
},
{
"key": "X",
"name": "Not Defined",
"definition": "This metric value is not defined. See CVSS documentation for details."
}
]
}
EPSS on Probability Binning
In a blog post on the EPSS website, the EPSS SIG discusses the challenges of binning probabilities.
EPSS SIG on Binning
However, there are a number of problems with binning. Bins are, by construction, subjective transformations of, in this case, a cardinal probability scale. And because the bins are subjectively defined, there is room for disagreement and misalignment across different users. There is no universal "right" answer to what the cutoff should be between a high and medium, or a medium and low.
Moreover, arbitrary cutoffs force two scores, which may be separated by the tiniest of values, to be labeled and then handled differently, despite there being no practical difference between them. For example, if two bins are set and the cutoff is set at 0.5, two vulnerabilities with probabilities of 0.499 and 0.501 would be treated just the same as two vulnerabilities with probabilities of 0.001 and 0.999. This kind of range compression is unavoidable, and so any benefits from this kind of mental shortcut must be weighed against the information loss inevitable with binning.
For these reasons, EPSS does not currently bin EPSS scores using labels.
From a data provider perspective, this makes sense. Avoiding information loss early in the information pipeline is a good idea. However, from a data consumer perspective, and especially when one is making a choice between a finite number of options (as in SSVC), binning can be a useful tool to reduce the complexity of the decision space.
Binning Probabilities
We have also provided a few basic SSVC Decision Points to capture probability-based information in different ways. Because SSVC is based on categorical decision points, we need to bin the continuous probability scores into discrete categories. However, as the EPSS SIG points out (see sidebar), there are always tradeoffs involved in binning. That's why we provide several different options for binning probabilities so that SSVC users can choose one that best fits their needs (or create their own if none of the provided options is suitable). Expand the example below to see the currently available options.
Exploring Decision Points for Binning Probabilities
We provide a few different decision points based on probability bins. You might look these over and choose one that fits your needs.
Probability Scale in 2 equal levels, ascending (basic:P_2A:1.0.0)
A probability scale that divides between less than 50% and greater than or equal to 50%
Value | Definition |
---|---|
Less than 50% (LT50) | 0.0 <= Probability < 0.5 |
Greater than 50% (GT50) | 0.5 <= Probability <= 1.0 |
Probability Scale in 2 equal levels, ascending (basic:P_2A:1.0.0) JSON Example
{
"namespace": "basic",
"key": "P_2A",
"version": "1.0.0",
"name": "Probability Scale in 2 equal levels, ascending",
"definition": "A probability scale that divides between less than 50% and greater than or equal to 50%",
"schemaVersion": "2.0.0",
"values": [
{
"key": "LT50",
"name": "Less than 50%",
"definition": "0.0 <= Probability < 0.5"
},
{
"key": "GT50",
"name": "Greater than 50%",
"definition": "0.5 <= Probability <= 1.0"
}
]
}
Probability Scale in 5 equal levels, ascending (basic:P_5A:1.0.0)
A probability scale with 20% increments
Value | Definition |
---|---|
Less than 20% (P0_20) | Probability < 0.2 |
20% to 40% (P20_40) | 0.2 <= Probability < 0.4 |
40% to 60% (P40_60) | 0.4 <= Probability < 0.6 |
60% to 80% (P60_80) | 0.6 <= Probability < 0.8 |
Greater than 80% (P80_100) | 0.8 <= Probability <= 1.0 |
Probability Scale in 5 equal levels, ascending (basic:P_5A:1.0.0) JSON Example
{
"namespace": "basic",
"key": "P_5A",
"version": "1.0.0",
"name": "Probability Scale in 5 equal levels, ascending",
"definition": "A probability scale with 20% increments",
"schemaVersion": "2.0.0",
"values": [
{
"key": "P0_20",
"name": "Less than 20%",
"definition": "Probability < 0.2"
},
{
"key": "P20_40",
"name": "20% to 40%",
"definition": "0.2 <= Probability < 0.4"
},
{
"key": "P40_60",
"name": "40% to 60%",
"definition": "0.4 <= Probability < 0.6"
},
{
"key": "P60_80",
"name": "60% to 80%",
"definition": "0.6 <= Probability < 0.8"
},
{
"key": "P80_100",
"name": "Greater than 80%",
"definition": "0.8 <= Probability <= 1.0"
}
]
}
Probability Scale in 5 weighted levels, ascending (basic:P_5W:1.0.0)
A probability scale with higher resolution as probability increases
Value | Definition |
---|---|
Less than 30% (P0_30) | Probability < 0.3 |
30% to 55% (P30_55) | 0.3 <= Probability < 0.55 |
55% to 75% (P55_75) | 0.55 <= Probability < 0.75 |
75% to 90% (P75_90) | 0.75 <= Probability < 0.9 |
Greater than 90% (P90_100) | 0.9 <= Probability <= 1.0 |
Probability Scale in 5 weighted levels, ascending (basic:P_5W:1.0.0) JSON Example
{
"namespace": "basic",
"key": "P_5W",
"version": "1.0.0",
"name": "Probability Scale in 5 weighted levels, ascending",
"definition": "A probability scale with higher resolution as probability increases",
"schemaVersion": "2.0.0",
"values": [
{
"key": "P0_30",
"name": "Less than 30%",
"definition": "Probability < 0.3"
},
{
"key": "P30_55",
"name": "30% to 55%",
"definition": "0.3 <= Probability < 0.55"
},
{
"key": "P55_75",
"name": "55% to 75%",
"definition": "0.55 <= Probability < 0.75"
},
{
"key": "P75_90",
"name": "75% to 90%",
"definition": "0.75 <= Probability < 0.9"
},
{
"key": "P90_100",
"name": "Greater than 90%",
"definition": "0.9 <= Probability <= 1.0"
}
]
}
CIS-CTI Words of Estimative Probability (basic:CIS_WEP:1.0.0)
A scale for expressing the likelihood of an event or outcome.
Value | Definition |
---|---|
Almost No Chance (ANC) | Probability < 0.05. Almost no chance, remote |
Very Unlikely (VU) | 0.05 <= Probability < 0.20. Very unlikely, highly improbable. |
Unlikely (U) | 0.20 <= Probability < 0.45. Unlikely, improbable. |
Roughly Even Chance (REC) | 0.45 <= Probability < 0.55. Roughly even chance, roughly even odds. |
Likely (L) | 0.55 <= Probability < 0.80. Likely, probable. |
Very Likely (VL) | 0.80 <= Probability < 0.95. Very likely, highly probable. |
Almost Certain (AC) | 0.95 <= Probability. Almost certain, nearly certain. |
CIS-CTI Words of Estimative Probability (basic:CIS_WEP:1.0.0) JSON Example
{
"namespace": "basic",
"key": "CIS_WEP",
"version": "1.0.0",
"name": "CIS-CTI Words of Estimative Probability",
"definition": "A scale for expressing the likelihood of an event or outcome.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "ANC",
"name": "Almost No Chance",
"definition": "Probability < 0.05. Almost no chance, remote"
},
{
"key": "VU",
"name": "Very Unlikely",
"definition": "0.05 <= Probability < 0.20. Very unlikely, highly improbable."
},
{
"key": "U",
"name": "Unlikely",
"definition": "0.20 <= Probability < 0.45. Unlikely, improbable."
},
{
"key": "REC",
"name": "Roughly Even Chance",
"definition": "0.45 <= Probability < 0.55. Roughly even chance, roughly even odds."
},
{
"key": "L",
"name": "Likely",
"definition": "0.55 <= Probability < 0.80. Likely, probable."
},
{
"key": "VL",
"name": "Very Likely",
"definition": "0.80 <= Probability < 0.95. Very likely, highly probable."
},
{
"key": "AC",
"name": "Almost Certain",
"definition": "0.95 <= Probability. Almost certain, nearly certain."
}
]
}
For this example, let's say you decide to use the Probability Scale in 5 weighted levels, ascending decision point:
Probability Scale in 5 weighted levels, ascending (basic:P_5W:1.0.0)
A probability scale with higher resolution as probability increases
Value | Definition |
---|---|
Less than 30% (P0_30) | Probability < 0.3 |
30% to 55% (P30_55) | 0.3 <= Probability < 0.55 |
55% to 75% (P55_75) | 0.55 <= Probability < 0.75 |
75% to 90% (P75_90) | 0.75 <= Probability < 0.9 |
Greater than 90% (P90_100) | 0.9 <= Probability <= 1.0 |
Probability Scale in 5 weighted levels, ascending (basic:P_5W:1.0.0) JSON Example
{
"namespace": "basic",
"key": "P_5W",
"version": "1.0.0",
"name": "Probability Scale in 5 weighted levels, ascending",
"definition": "A probability scale with higher resolution as probability increases",
"schemaVersion": "2.0.0",
"values": [
{
"key": "P0_30",
"name": "Less than 30%",
"definition": "Probability < 0.3"
},
{
"key": "P30_55",
"name": "30% to 55%",
"definition": "0.3 <= Probability < 0.55"
},
{
"key": "P55_75",
"name": "55% to 75%",
"definition": "0.55 <= Probability < 0.75"
},
{
"key": "P75_90",
"name": "75% to 90%",
"definition": "0.75 <= Probability < 0.9"
},
{
"key": "P90_100",
"name": "Greater than 90%",
"definition": "0.9 <= Probability <= 1.0"
}
]
}
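Applying the chosen decision point to a raw EPSS score is a straightforward threshold check. Here is a minimal sketch (the function name is our own) that bins a probability into the `basic:P_5W:1.0.0` value keys defined above:

```python
def p5w_bin(probability: float) -> str:
    """Bin a probability into basic:P_5W:1.0.0 value keys.

    Thresholds follow the decision point definitions: lower bounds are
    inclusive and upper bounds exclusive, except the topmost bin.
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if probability < 0.3:
        return "P0_30"
    if probability < 0.55:
        return "P30_55"
    if probability < 0.75:
        return "P55_75"
    if probability < 0.9:
        return "P75_90"
    return "P90_100"

print(p5w_bin(0.62))  # P55_75
```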
With our exploitation and probability binning decision points in hand, we can now consider how to combine them in a decision table to get a more nuanced view of exploitation risk.
Designing an Exploitation-focused Decision Table
Let's say you decide to create a new Decision Table that combines the EPSS probability information with the other exploitation-related decision points to determine a more informed outcome using the SSVC Exploitation decision point.
As a reminder, the SSVC Exploitation decision point has the following values:
Exploitation (ssvc:E:1.1.0)
The present state of exploitation of the vulnerability.
Value | Definition |
---|---|
None (N) | There is no evidence of active exploitation and no public proof of concept (PoC) of how to exploit the vulnerability. |
Public PoC (P) | One of the following is true: (1) Typical public PoC exists in sources such as Metasploit or websites like ExploitDB; or (2) the vulnerability has a well-known method of exploitation. |
Active (A) | Shared, observable, reliable evidence that the exploit is being used in the wild by real attackers; there is credible public reporting. |
Exploitation (ssvc:E:1.1.0) JSON Example
{
"namespace": "ssvc",
"key": "E",
"version": "1.1.0",
"name": "Exploitation",
"definition": "The present state of exploitation of the vulnerability.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "N",
"name": "None",
"definition": "There is no evidence of active exploitation and no public proof of concept (PoC) of how to exploit the vulnerability."
},
{
"key": "P",
"name": "Public PoC",
"definition": "One of the following is true: (1) Typical public PoC exists in sources such as Metasploit or websites like ExploitDB; or (2) the vulnerability has a well-known method of exploitation."
},
{
"key": "A",
"name": "Active",
"definition": "Shared, observable, reliable evidence that the exploit is being used in the wild by real attackers; there is credible public reporting."
}
]
}
In conversations with your organization's risk owners, you determine that you should focus your vulnerability management efforts on the vulnerabilities that are either already being actively exploited or are very likely to be exploited soon.
You decide to apply the following rules:
Rule | Description |
---|---|
Prioritize vuls in KEV as Active | If the vulnerability is in the CISA KEV, set the SSVC Exploitation value to Active. |
Treat very high EPSS probabilities as already Active | If the EPSS probability is >90%, set SSVC Exploitation value to Active. |
Amplify high EPSS probabilities | If the EPSS probability is 75-90%, bump the SSVC Exploitation value up one category (None becomes Public PoC; Public PoC becomes Active). |
Assume Public PoCs are Active when the EPSS probability is more likely than not | If the EPSS probability is 55-75%, bump SSVC Exploitation = Public PoC to Active. |
Default: Use CVSS Exploit Maturity | By default, use the CVSS Exploit Maturity value to set the SSVC Exploitation value, unless one of the other rules applies. |
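These rules can also be expressed directly in code. The sketch below (function and variable names are our own, not part of SSVC) takes the value keys of the three input decision points and returns an `ssvc:E:1.1.0` value key, applying the rules in priority order:

```python
def exploitation(e_nox: str, kev: str, p5w: str) -> str:
    """Combine cvss:E_NoX, cisa:KEV, and basic:P_5W value keys into an
    ssvc:E value key, following the rules in the table above."""
    if kev == "Y":            # Rule 1: in KEV implies Active
        return "A"
    if p5w == "P90_100":      # Rule 2: EPSS > 90% implies Active
        return "A"
    if p5w == "P75_90":       # Rule 3: bump up one category
        return {"U": "P", "P": "A", "A": "A"}[e_nox]
    if p5w == "P55_75" and e_nox == "P":  # Rule 4: likely PoC -> Active
        return "A"
    # Rule 5 (default): map CVSS Exploit Maturity to SSVC Exploitation
    return {"U": "N", "P": "P", "A": "A"}[e_nox]

print(exploitation("P", "N", "P55_75"))  # A
```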
After constructing the decision table according to these rules, you end up with the following table of values:
Row | Exploit Maturity (without Not Defined) v2.0.0 (cvss) | In KEV v1.0.0 (cisa) | Probability Scale in 5 weighted levels, ascending v1.0.0 (basic) | Exploitation v1.1.0 |
---|---|---|---|---|
0 | unreported | no | less than 30% | none |
1 | proof-of-concept | no | less than 30% | public poc |
2 | unreported | yes | less than 30% | active |
3 | unreported | no | 30% to 55% | none |
4 | attacked | no | less than 30% | active |
5 | proof-of-concept | yes | less than 30% | active |
6 | proof-of-concept | no | 30% to 55% | public poc |
7 | unreported | yes | 30% to 55% | active |
8 | unreported | no | 55% to 75% | none |
9 | attacked | yes | less than 30% | active |
10 | attacked | no | 30% to 55% | active |
11 | proof-of-concept | yes | 30% to 55% | active |
12 | proof-of-concept | no | 55% to 75% | active |
13 | unreported | yes | 55% to 75% | active |
14 | unreported | no | 75% to 90% | public poc |
15 | attacked | yes | 30% to 55% | active |
16 | attacked | no | 55% to 75% | active |
17 | proof-of-concept | yes | 55% to 75% | active |
18 | proof-of-concept | no | 75% to 90% | active |
19 | unreported | yes | 75% to 90% | active |
20 | unreported | no | greater than 90% | active |
21 | attacked | yes | 55% to 75% | active |
22 | attacked | no | 75% to 90% | active |
23 | proof-of-concept | yes | 75% to 90% | active |
24 | proof-of-concept | no | greater than 90% | active |
25 | unreported | yes | greater than 90% | active |
26 | attacked | yes | 75% to 90% | active |
27 | attacked | no | greater than 90% | active |
28 | proof-of-concept | yes | greater than 90% | active |
29 | attacked | yes | greater than 90% | active |
A diagram of the decision model is shown below.
Example Decision Table Diagram
The diagram below shows the decision model for this example. Each path through the diagram corresponds to a row in the table above.
---
title: Exploitation Data Integration Example Decision Table (example:DT_EXP:1.0.0)
---
graph LR
subgraph inputs[Inputs]
n1(( ))
subgraph s1["cvss:E_NoX:2.0.0"]
U_L0([U])
P_L0([P])
A_L0([A])
end
subgraph s2["cisa:KEV:1.0.0"]
U_N_L1([N])
P_N_L1([N])
U_Y_L1([Y])
A_N_L1([N])
P_Y_L1([Y])
A_Y_L1([Y])
end
subgraph s3["basic:P_5W:1.0.0"]
U_N_P0_30_L2([P0_30])
P_N_P0_30_L2([P0_30])
U_Y_P0_30_L2([P0_30])
U_N_P30_55_L2([P30_55])
A_N_P0_30_L2([P0_30])
P_Y_P0_30_L2([P0_30])
P_N_P30_55_L2([P30_55])
U_Y_P30_55_L2([P30_55])
U_N_P55_75_L2([P55_75])
A_Y_P0_30_L2([P0_30])
A_N_P30_55_L2([P30_55])
P_Y_P30_55_L2([P30_55])
P_N_P55_75_L2([P55_75])
U_Y_P55_75_L2([P55_75])
U_N_P75_90_L2([P75_90])
A_Y_P30_55_L2([P30_55])
A_N_P55_75_L2([P55_75])
P_Y_P55_75_L2([P55_75])
P_N_P75_90_L2([P75_90])
U_Y_P75_90_L2([P75_90])
U_N_P90_100_L2([P90_100])
A_Y_P55_75_L2([P55_75])
A_N_P75_90_L2([P75_90])
P_Y_P75_90_L2([P75_90])
P_N_P90_100_L2([P90_100])
U_Y_P90_100_L2([P90_100])
A_Y_P75_90_L2([P75_90])
A_N_P90_100_L2([P90_100])
P_Y_P90_100_L2([P90_100])
A_Y_P90_100_L2([P90_100])
end
end
subgraph outputs[Outcome]
subgraph s4["ssvc:E:1.1.0"]
U_N_P0_30_N_L3([N])
P_N_P0_30_P_L3([P])
U_Y_P0_30_A_L3([A])
U_N_P30_55_N_L3([N])
A_N_P0_30_A_L3([A])
P_Y_P0_30_A_L3([A])
P_N_P30_55_P_L3([P])
U_Y_P30_55_A_L3([A])
U_N_P55_75_N_L3([N])
A_Y_P0_30_A_L3([A])
A_N_P30_55_A_L3([A])
P_Y_P30_55_A_L3([A])
P_N_P55_75_A_L3([A])
U_Y_P55_75_A_L3([A])
U_N_P75_90_P_L3([P])
A_Y_P30_55_A_L3([A])
A_N_P55_75_A_L3([A])
P_Y_P55_75_A_L3([A])
P_N_P75_90_A_L3([A])
U_Y_P75_90_A_L3([A])
U_N_P90_100_A_L3([A])
A_Y_P55_75_A_L3([A])
A_N_P75_90_A_L3([A])
P_Y_P75_90_A_L3([A])
P_N_P90_100_A_L3([A])
U_Y_P90_100_A_L3([A])
A_Y_P75_90_A_L3([A])
A_N_P90_100_A_L3([A])
P_Y_P90_100_A_L3([A])
A_Y_P90_100_A_L3([A])
end
end
n1 --- U_L0
n1 --- P_L0
n1 --- A_L0
U_L0 --- U_N_L1
U_N_L1 --- U_N_P0_30_L2
U_N_P0_30_L2 --- U_N_P0_30_N_L3
P_L0 --- P_N_L1
P_N_L1 --- P_N_P0_30_L2
P_N_P0_30_L2 --- P_N_P0_30_P_L3
U_L0 --- U_Y_L1
U_Y_L1 --- U_Y_P0_30_L2
U_Y_P0_30_L2 --- U_Y_P0_30_A_L3
U_N_L1 --- U_N_P30_55_L2
U_N_P30_55_L2 --- U_N_P30_55_N_L3
A_L0 --- A_N_L1
A_N_L1 --- A_N_P0_30_L2
A_N_P0_30_L2 --- A_N_P0_30_A_L3
P_L0 --- P_Y_L1
P_Y_L1 --- P_Y_P0_30_L2
P_Y_P0_30_L2 --- P_Y_P0_30_A_L3
P_N_L1 --- P_N_P30_55_L2
P_N_P30_55_L2 --- P_N_P30_55_P_L3
U_Y_L1 --- U_Y_P30_55_L2
U_Y_P30_55_L2 --- U_Y_P30_55_A_L3
U_N_L1 --- U_N_P55_75_L2
U_N_P55_75_L2 --- U_N_P55_75_N_L3
A_L0 --- A_Y_L1
A_Y_L1 --- A_Y_P0_30_L2
A_Y_P0_30_L2 --- A_Y_P0_30_A_L3
A_N_L1 --- A_N_P30_55_L2
A_N_P30_55_L2 --- A_N_P30_55_A_L3
P_Y_L1 --- P_Y_P30_55_L2
P_Y_P30_55_L2 --- P_Y_P30_55_A_L3
P_N_L1 --- P_N_P55_75_L2
P_N_P55_75_L2 --- P_N_P55_75_A_L3
U_Y_L1 --- U_Y_P55_75_L2
U_Y_P55_75_L2 --- U_Y_P55_75_A_L3
U_N_L1 --- U_N_P75_90_L2
U_N_P75_90_L2 --- U_N_P75_90_P_L3
A_Y_L1 --- A_Y_P30_55_L2
A_Y_P30_55_L2 --- A_Y_P30_55_A_L3
A_N_L1 --- A_N_P55_75_L2
A_N_P55_75_L2 --- A_N_P55_75_A_L3
P_Y_L1 --- P_Y_P55_75_L2
P_Y_P55_75_L2 --- P_Y_P55_75_A_L3
P_N_L1 --- P_N_P75_90_L2
P_N_P75_90_L2 --- P_N_P75_90_A_L3
U_Y_L1 --- U_Y_P75_90_L2
U_Y_P75_90_L2 --- U_Y_P75_90_A_L3
U_N_L1 --- U_N_P90_100_L2
U_N_P90_100_L2 --- U_N_P90_100_A_L3
A_Y_L1 --- A_Y_P55_75_L2
A_Y_P55_75_L2 --- A_Y_P55_75_A_L3
A_N_L1 --- A_N_P75_90_L2
A_N_P75_90_L2 --- A_N_P75_90_A_L3
P_Y_L1 --- P_Y_P75_90_L2
P_Y_P75_90_L2 --- P_Y_P75_90_A_L3
P_N_L1 --- P_N_P90_100_L2
P_N_P90_100_L2 --- P_N_P90_100_A_L3
U_Y_L1 --- U_Y_P90_100_L2
U_Y_P90_100_L2 --- U_Y_P90_100_A_L3
A_Y_L1 --- A_Y_P75_90_L2
A_Y_P75_90_L2 --- A_Y_P75_90_A_L3
A_N_L1 --- A_N_P90_100_L2
A_N_P90_100_L2 --- A_N_P90_100_A_L3
P_Y_L1 --- P_Y_P90_100_L2
P_Y_P90_100_L2 --- P_Y_P90_100_A_L3
A_Y_L1 --- A_Y_P90_100_L2
A_Y_P90_100_L2 --- A_Y_P90_100_A_L3
And here is a JSON object representation of the decision table for programmatic use:
Example Decision Table JSON
The JSON representation of the decision table is shown below.
{
"namespace": "example",
"key": "DT_EXP",
"version": "1.0.0",
"name": "Exploitation Data Integration Example",
"definition": "An example decision table that uses multiple exploitation-related decision points, including EPSS probability",
"schemaVersion": "2.0.0",
"registered": false,
"decision_points": {
"cvss:E_NoX:2.0.0": {
"namespace": "cvss",
"key": "E_NoX",
"version": "2.0.0",
"name": "Exploit Maturity (without Not Defined)",
"definition": "This metric measures the likelihood of the vulnerability being attacked, and is based on the current state of exploit techniques, exploit code availability, or active, “in-the-wild” exploitation. This version does not include the Not Defined (X) option.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "U",
"name": "Unreported",
"definition": "Based on available threat intelligence each of the following must apply: No knowledge of publicly available proof-of-concept exploit code No knowledge of reported attempts to exploit this vulnerability No knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., neither the “POC” nor “Attacked” values apply)"
},
{
"key": "P",
"name": "Proof-of-Concept",
"definition": "Based on available threat intelligence each of the following must apply: Proof-of-concept exploit code is publicly available No knowledge of reported attempts to exploit this vulnerability No knowledge of publicly available solutions used to simplify attempts to exploit the vulnerability (i.e., the “Attacked” value does not apply)"
},
{
"key": "A",
"name": "Attacked",
"definition": "Based on available threat intelligence either of the following must apply: Attacks targeting this vulnerability (attempted or successful) have been reported Solutions to simplify attempts to exploit the vulnerability are publicly or privately available (such as exploit toolkits)"
}
]
},
"cisa:KEV:1.0.0": {
"namespace": "cisa",
"key": "KEV",
"version": "1.0.0",
"name": "In KEV",
"definition": "Denotes whether a vulnerability is in the CISA Known Exploited Vulnerabilities (KEV) list.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "N",
"name": "No",
"definition": "Vulnerability is not listed in KEV."
},
{
"key": "Y",
"name": "Yes",
"definition": "Vulnerability is listed in KEV."
}
]
},
"basic:P_5W:1.0.0": {
"namespace": "basic",
"key": "P_5W",
"version": "1.0.0",
"name": "Probability Scale in 5 weighted levels, ascending",
"definition": "A probability scale with higher resolution as probability increases",
"schemaVersion": "2.0.0",
"values": [
{
"key": "P0_30",
"name": "Less than 30%",
"definition": "Probability < 0.3"
},
{
"key": "P30_55",
"name": "30% to 55%",
"definition": "0.3 <= Probability < 0.55"
},
{
"key": "P55_75",
"name": "55% to 75%",
"definition": "0.55 <= Probability < 0.75"
},
{
"key": "P75_90",
"name": "75% to 90%",
"definition": "0.75 <= Probability < 0.9"
},
{
"key": "P90_100",
"name": "Greater than 90%",
"definition": "0.9 <= Probability <= 1.0"
}
]
},
"ssvc:E:1.1.0": {
"namespace": "ssvc",
"key": "E",
"version": "1.1.0",
"name": "Exploitation",
"definition": "The present state of exploitation of the vulnerability.",
"schemaVersion": "2.0.0",
"values": [
{
"key": "N",
"name": "None",
"definition": "There is no evidence of active exploitation and no public proof of concept (PoC) of how to exploit the vulnerability."
},
{
"key": "P",
"name": "Public PoC",
"definition": "One of the following is true: (1) Typical public PoC exists in sources such as Metasploit or websites like ExploitDB; or (2) the vulnerability has a well-known method of exploitation."
},
{
"key": "A",
"name": "Active",
"definition": "Shared, observable, reliable evidence that the exploit is being used in the wild by real attackers; there is credible public reporting."
}
]
}
},
"outcome": "ssvc:E:1.1.0",
"mapping": [
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "N"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "P"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "N"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "P"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "N"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P0_30",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "P"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P30_55",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P55_75",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "U",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P75_90",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "N",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "P",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
},
{
"cvss:E_NoX:2.0.0": "A",
"cisa:KEV:1.0.0": "Y",
"basic:P_5W:1.0.0": "P90_100",
"ssvc:E:1.1.0": "A"
}
]
}
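For programmatic use, the outcome for any combination of inputs can be looked up in the `mapping` array. A minimal sketch, assuming the decision table JSON above has been loaded into a `dict` (e.g., via `json.load`); the fragment shown here reuses the same row shape:

```python
def lookup(table: dict, e_nox: str, kev: str, p5w: str) -> str:
    """Find the ssvc:E:1.1.0 outcome for a combination of input value keys."""
    for row in table["mapping"]:
        if (row["cvss:E_NoX:2.0.0"] == e_nox
                and row["cisa:KEV:1.0.0"] == kev
                and row["basic:P_5W:1.0.0"] == p5w):
            return row["ssvc:E:1.1.0"]
    raise KeyError(f"no mapping row for ({e_nox}, {kev}, {p5w})")

# Fragment of the decision table in the same shape as the JSON above:
table = {"mapping": [
    {"cvss:E_NoX:2.0.0": "U", "cisa:KEV:1.0.0": "N",
     "basic:P_5W:1.0.0": "P0_30", "ssvc:E:1.1.0": "N"},
]}
print(lookup(table, "U", "N", "P0_30"))  # N
```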
Now you've created a clear way to combine EPSS probability scores with other exploitation-related information to inform your SSVC decisions downstream.
Conclusion
In this How-To, we've explored how to combine EPSS probability scores with other exploitation-related information in an SSVC decision table. By thoughtfully designing decision points and tables, you can create a more nuanced and effective vulnerability management strategy that prioritizes risks based on the likelihood of exploitation.