
Validating Reports

When a vendor or coordinator receives a vulnerability report, it usually must be prioritized alongside other vulnerability reports already in progress, new feature development, and possibly other non-security bug fixes. As a result, there are a few considerations in dealing with incoming reports, which essentially boil down to three questions:

Questions to Consider

  1. Is the report in scope?
  2. Is the report credible and valid?
  3. What priority should the report be given?

We tackle the first two questions in this section, and the third in the next section.

Scope

A CVD participant's scope determines which cases they can and should handle. A vendor's scope is usually the set of their products, along with any components their products depend on.

End-of-Life Software and Scope

For many vendors, End-of-Life (EoL) software might be out of scope by default, but sometimes it is worth considering as in scope. For example, vendors may choose to provide fixes for vulnerabilities in EoL software if the software only recently went out of support and the vulnerability has a severe impact.

Scope for a Coordinating CSIRT

A coordinator also has a scope defined by their specific constituency and mission. For example, a CSIRT within a government agency might have a scope limited to systems operated by that agency. Other government CSIRTs might have responsibility for systems across the entire government. Some ISAOs have a scope to cover vulnerabilities in a specific infrastructure sector or industry.

Regardless of what a CVD participant's scope is, it is usually easiest to determine whether a received report meets their scope criteria before proceeding to validate it. If a report would be out of scope even if true, there is no need to proceed with judging its credibility. CVD participants should decide what to do about out-of-scope reports separately, before the vulnerability coordination validation and prioritization decisions begin.

Handing off Out-of-Scope Reports

A judgement that a report is out of scope need not result in simply dropping the case. The case might be handed off to a more appropriate vendor or coordinator for whom it would be in scope, for example.
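
Putting the scope-first ordering and the hand-off option together, the following is a minimal sketch in Python of a triage entry point. All names here (`Report`, `affected_product`, `in_scope_products`, `hand_off`) are invented for illustration and are not part of any particular tracking system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Disposition(Enum):
    HANDED_OFF = auto()            # out of scope; routed elsewhere
    AWAITING_VALIDATION = auto()   # in scope; proceed to credibility and validity

@dataclass
class Report:
    affected_product: str
    details: str

def receive(report: Report, in_scope_products: set[str]) -> Disposition:
    # Scope is settled first: a report that would be out of scope even
    # if true needs no credibility or validity judgment.
    if report.affected_product not in in_scope_products:
        # Out of scope need not mean dropped; the case might be handed
        # off to a vendor or coordinator for whom it is in scope.
        hand_off(report)
        return Disposition.HANDED_OFF
    return Disposition.AWAITING_VALIDATION

def hand_off(report: Report) -> None:
    """Placeholder for routing the case to a more appropriate recipient."""
    ...
```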

Recognizing High-Quality Reports

Not all reports are actionable. Some reports may under-specify the problem, making it difficult or impossible to reproduce. Some may contain irrelevant details. Some will be well written and concise, but many will not. Some reports could describe problems that are already known or for which a fix is already in the pipeline.

In easy cases, a simple description of the vulnerability, a screenshot, or a copy/pasted snippet of code is all that is necessary to validate that a report is likely accurate. In more complex scenarios, stronger evidence and/or more effort might be required to confirm the vulnerability. Responsive vendors should ensure analysts have access to appropriate resources to test and validate bugs, such as virtual machines (VMs), a testing network, and debuggers.

Report Credibility

SSVC's Report Credibility Decision Point

The content in this section is adapted from the CERT/CC's Stakeholder-Specific Vulnerability Categorization (SSVC) Report Credibility Decision Point documentation.

An analyst should start with a presumption of credibility and proceed toward disqualification. The reason for this is that, as a coordinator, occasionally doing a bit of extra work on a bad report is preferable to rejecting legitimate reports. This is essentially stating a preference for false positives over false negatives with respect to credibility determination.

There are no ironclad rules for this assessment, and other coordinators may find other guidance works for them. Credibility assessment topics include indicators for and against credibility, perspective, topic, and relationship to report scope.

Credibility Indicators

The credibility of a report is assessed by a balancing test. The indicators for or against are not commensurate, and so they cannot be put on a scoring scale, summed, and weighed.

Note

A report may be treated as credible when either

  1. the vendor confirms the existence of the vulnerability or
  2. an analyst who is neither the reporter nor the vendor independently confirms the vulnerability.

If neither of these confirmations is available, then the report's credibility depends on a balancing test among the following indicators. (A rough sketch of this decision appears after the indicator lists below.)

Indicators for Credibility

Indicators in favor of credibility include when the report

  • is specific about what is affected.
  • provides sufficient detail to reproduce the vulnerability.
  • describes an attack scenario.
  • suggests mitigations.
  • includes proof-of-concept exploit code or steps to reproduce.
  • neither exaggerates nor understates the impact.

Additionally, screenshots and videos, if provided, should support the written text of the report, not replace it.

Indicators against Credibility

Indicators against credibility include when the report

  • is “spammy” or exploitative (for example, the report is an attempt to upsell the receiver on some product or service).
  • is vague or ambiguous about which vendors, products, or versions are affected (for example, the report claims that all “cell phones” or “wifi” or “routers” are affected).
  • is vague or ambiguous about the preconditions necessary to exploit the vulnerability.
  • is vague or ambiguous about the impact if exploited.
  • exaggerates the impact if exploited.
  • makes extraordinary claims without correspondingly extraordinary evidence (for example, the report claims that exploitation could result in catastrophic damage to some critical system without a clear causal connection between the facts presented and the impacts claimed).
  • is unclear about what the attacker gains by exploiting the vulnerability. What do they get that they didn't already have? For example, an attacker with system privileges can already do lots of bad things, so a report that assumes system privileges as a precondition to exploitation needs to explain what else this gives the attacker.
  • depends on preconditions that are extremely rare in practice, and lacks adequate evidence for why those preconditions might be expected to occur (for example, the vulnerability is only exposed in certain non-default configurations—unless there is evidence that a community of practice has established a norm of such a non-default setup).
  • claims dire impact for a trivially found vulnerability. This is not impossible, but most products and services that have been around for a while have already had their low-hanging fruit picked. One notable exception would be if the reporter applied a completely new method for finding vulnerabilities to discover the subject of the report.
  • is rambling and is more about a narrative than describing the vulnerability. One description is that the report reads like a food recipe with the obligatory search engine optimization preamble.
  • conspicuously misuses technical terminology. This is evidence that the reporter may not understand what they are talking about.
  • consists mostly of raw tool output. Fuzz testing outputs are not vulnerability reports.
  • lacks sufficient detail for someone to reproduce the vulnerability.
  • is just a link to a video or set of images, or lacks written detail while claiming “it's all in the video”. Imagery should support a written description, not replace it.
  • describes a bug with no discernible security impact.
  • fails to describe an attack scenario, and none is obvious.

Two additional indicators of non-credible reports are:

  • The reporter is known to have submitted low-quality reports in the past.
  • The analyst’s professional colleagues consider the report to be not credible.
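
As a rough illustration of the decision logic above, the following Python sketch uses our own hypothetical names (`CredibilityEvidence`, `assess`); it is not SSVC tooling. It deliberately collects indicators for analyst review rather than scoring and summing them, since the indicators are not commensurate:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Credibility(Enum):
    CREDIBLE = auto()
    ANALYST_REVIEW = auto()  # balancing test needed; no automatic answer

@dataclass
class CredibilityEvidence:
    vendor_confirmed: bool = False
    # Confirmed by an analyst who is neither the reporter nor the vendor.
    independently_confirmed: bool = False
    indicators_for: list[str] = field(default_factory=list)
    indicators_against: list[str] = field(default_factory=list)

def assess(evidence: CredibilityEvidence) -> Credibility:
    # Either confirmation is sufficient on its own.
    if evidence.vendor_confirmed or evidence.independently_confirmed:
        return Credibility.CREDIBLE
    # Absent confirmation, start from a presumption of credibility
    # and proceed toward disqualification.
    if not evidence.indicators_against:
        return Credibility.CREDIBLE
    # Indicators against exist: the balancing test stays a human
    # judgment, so the sketch only flags the report for review.
    return Credibility.ANALYST_REVIEW
```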

Why isn't poor grammar on this list?

We considered adding poor grammar or spelling as an indicator of non-credibility. On further reflection, we do not recommend using poor grammar or spelling as an indicator of low report quality, as many reporters may not be native speakers of the receiver's language. Poor grammar alone is not sufficient to dismiss a report as not credible, and even when poor grammar is accompanied by other indicators of non-credibility, those other indicators are sufficient to make the determination.

Credibility of what to whom

We are interested in the coordinating analyst's assessment of the credibility of a report. This is separate from the fact that a reporter probably reports something because they believe the report is credible.

The analyst should assess the credibility of the report of the vulnerability, not the claims of the impact of the vulnerability. A report may be credible in terms of the fact of the vulnerability's existence even if the stated impacts are inaccurate. However, the more extreme the stated impacts are, the more skepticism is necessary. For this reason, “exaggerating the impact if exploited” is an indicator against credibility. Furthermore, a report may be factual but not identify any security implications; such reports are bug reports, not vulnerability reports, and are considered out of scope.

As noted above, a coordinator's scope is defined by their specific constituency and mission, and a report can be entirely credible yet remain out of scope for your coordination practice. Such reports should be dispositioned separately, before the vulnerability coordination prioritization decision begins; if a report would be out of scope even if true, there is no need to proceed with judging its credibility.

Addressing Validation Problems

Not all reports you receive will be immediately actionable. Some reports may be vague, ambiguous, or otherwise difficult to validate. Here are a few tips for dealing with such reports:

Consider Reporter Reputation

It may be that reproducing the vulnerability is beyond the capability of, or the time available to, the first-tier recipient at the vendor or coordinator. Most often this occurs when the conditions required to exploit the vulnerability are difficult to reproduce in a test environment. In this case, the triager can weigh the reputation of the reporter against the claims being made in the report and the impact if they were true. You don't want to dismiss a report of a serious vulnerability just because it is unexpected. A reporter with a high reputation might lend weight to an otherwise low-quality report (although in our experience, finders and reporters with a high reputation tend to have earned that reputation by submitting high-quality reports).

Be cautious of low-quality reports

The possibility also exists that someone is sending you reports to waste your time, or that the reporter erroneously believes the report is much more serious than your analysis suggests. Not all reports you receive warrant your attention. It is usually reasonable to decline such reports, provided you give the reporter a summary of your analysis and the ability to appeal (presumably by providing the needed clarifying information).

Follow up promptly with the reporter

If, as the recipient of a report, you encounter difficulty in reproducing the vulnerability, follow up with the reporter promptly and courteously; be sure to be specific about what you tried so that the reporter can provide effective advice. In such cases, the receiver might place the report in a wait state while additional information is requested from the reporter.
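
One way a receiver's tracking system might represent that pause is with an explicit wait state. The state names in this Python sketch are invented for illustration; real trackers will differ:

```python
from enum import Enum, auto

class ReportState(Enum):
    RECEIVED = auto()
    VALIDATING = auto()
    WAITING_FOR_REPORTER = auto()  # more information requested
    VALID = auto()
    INVALID = auto()

# Allowed transitions: a report that cannot yet be reproduced moves to
# WAITING_FOR_REPORTER rather than straight to INVALID, and returns to
# VALIDATING when the reporter responds.
TRANSITIONS = {
    ReportState.RECEIVED: {ReportState.VALIDATING},
    ReportState.VALIDATING: {
        ReportState.VALID,
        ReportState.INVALID,
        ReportState.WAITING_FOR_REPORTER,
    },
    ReportState.WAITING_FOR_REPORTER: {ReportState.VALIDATING, ReportState.INVALID},
    ReportState.VALID: set(),
    ReportState.INVALID: set(),
}
```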

Advice for Reporters

Reporters should review Reporting to ensure the report contains enough details for the recipient to verify and reproduce a vulnerability. Be as specific as you can. Vendors that follow up with questions are doing the right thing, and attempting to validate your report; be friendly and courteous and attempt to provide as much detail and help as you can.