Fair and Accurate Security Ratings: The Peculiar Case of Passive Patch Pronouncements

In a previous article, I spoke about receiving Security Ratings reports with hundreds of pages of findings. Many, if not all, Security Ratings reports contain findings and ratings of an assessed organization’s software patching cadence. We’ve all been reminded to ‘patch early and often,’ and these Security Ratings reports are no different. In fact, for some Ratings vendors, software patching can account for around 25% of the scoring. That’s a big chunk of an assessed organization’s rating, with great influence to help or harm the assessed organization, particularly if the data isn’t accurate.

“High Severity Vulnerabilities found”

Imagine viewing a report that contains findings like the one above, associated with pages upon pages of software patching vulnerabilities. Imagine that a company you are evaluating is the subject of this report and ask yourself the following questions:

  • What applications are impacted here?
  • Is my company’s use of those impacted applications critical?
  • Is my company’s data adversely impacted by these findings?
  • What OS platform is being used, and does it matter?
  • Is the IP address owned by the organization or is it a dynamic IP from a third party provider?
  • What level of confidence do you have in this data? What level do you think the Security Ratings vendor has?
  • What methodology was used for collecting this data, and where’s the contextual evidence to support it?

If you’re a non-technical decision-maker, your eyes probably glaze over and you just move on. These questions may not even occur to you. You may simply view the size of the report as proof that “there’s just too much evidence that the assessed organization isn’t patching; therefore, their patching practices are bad, and that must mean they have bad processes everywhere.” Time to re-think the relationship.

If you’re an Information Security professional evaluating another company, you might spend a few extra minutes to see if there’s something that stands out in the data.

However, if you’re an Information Security professional evaluating your own company that has implemented open source operating systems like Fedora, CentOS, RHEL, or Ubuntu, you might look at this a bit differently.

Below is an excerpt from the CentOS FAQ entry, “A PCI audit says I am running a version which has CVE exploits in it”:

Security patches and bug fixes are backported into the shipped version. See here for details: https://access.redhat.com/security/updates/backporting. Simply reading a version number on a package or a banner from network scanning is not sufficient to indicate a vulnerability, in light of this approach. Most reputable vendors understand this, but some seem to not account for the upstream approach in their product’s reporting interface.
If a scan report is complaining about package versions, the person providing it is probably not doing it right, as the popular meme goes. CentOS and its upstream are continuously updated, and the CVE’s addressed are reflected in the aforementioned changelog, so running a protective backup, updating, and rebooting or restarting the affected daemon service should address the matter. Other approaches, such as using one keyed to package version numbers, are simply wrong.
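To make the FAQ’s point concrete, the check it describes can be made against the package changelog rather than the advertised version number. Below is a minimal sketch of that idea in Python; it assumes an RPM-based system (CentOS, RHEL, Fedora) with the rpm command available, and the package name and CVE ID shown are placeholders to substitute with your own.

```python
import subprocess
import sys

def cve_backported(package: str, cve_id: str) -> bool:
    """Return True if the installed package's RPM changelog mentions the CVE,
    i.e. the fix was backported even though the version number looks 'old'."""
    try:
        changelog = subprocess.run(
            ["rpm", "-q", "--changelog", package],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False  # rpm not available or package not installed
    return cve_id in changelog

if __name__ == "__main__":
    # Placeholders: substitute the package and CVE you actually care about.
    package = sys.argv[1] if len(sys.argv) > 1 else "httpd"
    cve_id = sys.argv[2] if len(sys.argv) > 2 else "CVE-XXXX-NNNN"
    if cve_backported(package, cve_id):
        print(f"{cve_id} is listed in {package}'s changelog: fix was backported.")
    else:
        print(f"No changelog entry found for {cve_id} in {package}.")
```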

While CentOS and Red Hat have had these articles in place for some time, Red Hat recently felt the need to add greater specificity in addressing backporting and the Security Ratings industry.

Security Ratings organizations often talk about the “volume of data,” such as the millions of critical data points they collect and analyze. They also often claim to have the greatest breadth and quality of intelligence available, all with a false positive rate of less than 1%. However, without a published methodology or any contextual evidence to support the findings, there is no way to verify these statements as fact; they remain glossy marketing claims regurgitated by sales.

Don’t take my word for it

It would appear that security rating vendors think that all networks run on Windows and fit neatly into uniform boxes. However, even with published evidence (such as the articles referenced above) that servers with backported security fixes are connected to the internet, and that their analysis and rating processes are therefore inaccurate, the Security Ratings industry is still doing it wrong.

Security metrics shouldn’t be reduced to the unreliable banner scraping of software versions. But that appears to be just what Security Ratings vendors have done.
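For context on what banner scraping actually involves, here is a bare-bones sketch of an HTTP banner grab in Python. The hostname in the usage comment is a placeholder; the point is that the “version” recovered is nothing more than whatever text the remote server chooses to advertise.

```python
import socket

def grab_http_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
    """Return whatever Server header the remote host chooses to advertise."""
    request = f"HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        response = sock.recv(4096).decode(errors="replace")
    for line in response.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "(no Server header advertised)"

# Example usage -- "www.example.com" is a placeholder target you control:
# print(grab_http_banner("www.example.com"))
```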

Here’s another example:

If your eyes were immediately drawn to that number on the right, I wouldn’t blame you. It’s easy to create a bias based on an artificial statistic that has a large number. In this case, the author wishes you to know that this host hasn’t been patched in over 7 years.

Or has it?

Some industry colleagues and I wanted to see how bad these Security Ratings findings really are. Are there truly millions of data points being used, and do they really reflect the truth? Or is this something more… simple?

We secretly replaced the banner information on Linux machines running applications such as Apache with an alternative text string, such as IIS/4.0, and watched to see what would happen.

Within days of this simple text modification (which anyone can perform without impacting the operation of the application or system), the hosts started appearing in Security Ratings vendors’ reporting, rated against the modified header information.
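The experiment was performed on Apache hosts; as a stand-in illustration (not the actual change made to those Apache servers), the Python sketch below shows how trivially a service can advertise an arbitrary Server banner, such as Microsoft-IIS/4.0, to any banner-based scanner.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SpoofedBannerHandler(BaseHTTPRequestHandler):
    def version_string(self):
        # The Server response header is whatever this method returns --
        # here, a deliberately misleading banner on a Linux host.
        return "Microsoft-IIS/4.0"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"A Linux host advertising an IIS banner.\n")

    def do_HEAD(self):
        # Banner grabs often use HEAD; it returns the same spoofed header.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Any banner-based scanner pointed at this port will record "IIS/4.0".
    HTTPServer(("0.0.0.0", 8080), SpoofedBannerHandler).serve_forever()
```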

An executive for a Security Ratings vendor recently said:

“We want to understand the state of every device that’s connected to the internet”
“We want to understand what is the level of security hygiene of this asset and what is the level of compromise of this asset.”

It may be a surprise to the person behind these statements, but I don’t believe they’re going to find this in the banner (even if it’s not modified). The good news is that what’s being described IS possible, but not without consent. It requires working WITH assessed organizations.

This Security Ratings executive went on to say:

“Security can’t be reduced to just: ‘I see that you have this unpatched vulnerability’. It’s more about: who have you hired to run your security, is the team competent, do they have processes and procedures in place?”

However, none of these factors are captured or evaluated by Security Ratings vendors — who instead capture information passively and make determinations based solely on that passive information.

The Ratings Vendor Response

When we brought this information regarding backporting to one Security Ratings vendor, their initial response was that “it is hard to discover these patches and specifically the CVE’s they patch.”

This vendor found this particular problem so difficult, in fact, that instead of stopping, investigating, implementing, and testing a proper fix, they simply suppressed the alerts. This means that while the data was still visibly showing up in their platform as identifying vulnerabilities, the findings were no longer listed or rated against the assessed organization that raised the issue, and only that organization. If it’s out of sight, it should be out of mind, right? Perhaps for just one organization being assessed, but definitely not for everyone being rated. That, to me, is a pretty clear sign that they don’t care about accuracy.

Another Ratings organization stated that they primarily determine version information based on what is visible in the data they collect, and that they don’t perform vulnerability scans/tests, even though their reporting states that they have used ‘passive analysis to identify systems running end-of-support software that have security vulnerabilities’. The question remains as to how they can determine that vulnerabilities exist without running vulnerability scans or tests. Rather than correctly addressing the backporting issue, they simply deemed their findings to be ‘false positives.’

How can Security Ratings organizations who put so much weight on patching also claim that their results, which include these inaccuracies, are empirical and accurate? This approach is similar to taking a single IP address originating from one country and making definitive statements about nation-state attribution without any other data to confirm it.

Security Ratings organizations need to stop referring to the products of their poor processes and lack of testing as ‘false positives.’ Further, Security Ratings organizations shouldn’t wait until an assessed organization objects to these false positives before being accountable for their accuracy.

The process of assessing the security posture of an organization’s infrastructure should be largely defined publicly by the Security Ratings organization’s methodology. Without a published methodology, the data and ratings cannot be considered transparent, accurate, or verifiable.

Verify until you can Trust but Verify

Many Security Ratings vendors have publicly committed to honoring the Principles for Fair and Accurate Security Ratings (and may even have put their name next to them and blogged about it). Organizations being assessed by, as well as those relying on, Security Ratings vendor reports are strongly encouraged to understand and apply the Principles, and to include the points listed below in their discussions and analysis of the report:

Accuracy and Validation:

Ratings should be empirical, data-driven, or notated as expert opinion. Rating companies should provide validation of their rating methodologies and historical performance of their models. Ratings shall promptly reflect the inclusion of corrected information upon validation.
— Principles for Fair and Accurate Security Ratings

When provided with access to third-party Security Ratings reporting and data:

  • Ensure the report includes contextual, time-bound evidence to support the findings within. If the report does not include this, ask the Ratings vendor to provide these details and/or explain why they are not provided.
  • With regard to patching, ask how they take security backporting into account and where that is stated in their methodology. If they do not account for backporting, ask why that is not clearly articulated in the report, along with an explanation of how backporting changes the findings.
  • Ask what level of confidence they have in their reported findings.
  • Does the report provide a means to verify that the findings are accurate, including whether the banner information (which is modifiable) is actually what it claims to be at face value? How do they know it is accurate?

Transparency:

Rating companies shall provide sufficient transparency into the methodologies and types of data used to determine their ratings, including information on data origination as requested and when feasible, for customers and rated organizations to understand how ratings are derived. Any rated organization shall be allowed access to their individual rating and the data that impacts a change in their rating.
— Principles for Fair and Accurate Security Ratings

Anyone provided with access to third-party Security Ratings reporting and data should:

  • Ensure understanding of how the Ratings company arrived at their conclusions.
  • Review their methodologies and see how comprehensive they are.
  • If they do not provide their methodology either as part of the report or via a publicly-accessible link, question why they do not.
  • Ensure that you and your organization are allowed access to full and complete information related to the score, and not just a summary report that includes only a subset of measured data.

Confidentiality:

Information disclosed by a rated organization during the course of a challenged rating or dispute shall be appropriately protected. Rating companies should not publicize an individual organization’s rating. Rating companies shall not provide third parties with sensitive or confidential information on rated organizations that could lead directly to system compromise.
— Principles for Fair and Accurate Security Ratings

Organizations should think about the type of information they are viewing in these reports, even (and especially) if the findings don’t belong to them. Ratings vendors provide their customers with reports that they describe as an unprecedented, accurate analysis built from millions of passively captured data points. That does not make it ethical to share all of the collected data with anyone who pays while withholding the same information from those being assessed. This could lead to a pay-for-exploitation business model.

Anyone provided with access to third-party Security Ratings reporting and data should:

  • Ask their Security Ratings company who they share reports and data with, and if different levels of contextual evidence are provided to assessed organizations than to customers. If not, why not?
  • Review the reports provided and determine whether they pair IP addresses and hostnames with specific vulnerabilities. Ask if there is a reason for providing the IP address and hostname next to the finding they claim makes the host vulnerable.
  • Ask if this information has already been disclosed to the assessed organization. If so, when, and how. If not, why not?

Transparency Through Social Engineering

For many organizations being assessed, the results and findings come as a shock, and with them, a sense of urgency to set the record straight. Without context and confidence, these bold statements portraying a lack of operational security can lead organizations to believe that the statements and findings are ‘ground truth,’ when in reality the data is so unstable it’s quicksand.

It is not your responsibility to confirm or clarify your operational security posture to your attackers. By doing so, you leave yourself open to an increased risk of attack.

Would your legal team honor a request by any organization to provide comprehensive data about your organization’s infrastructure and security controls that are not already published publicly? Is there a difference between an attacker attempting to socially engineer data from your organization and staff, and a Security Ratings vendor that may have a financial relationship with an organization you know?

Social engineering, in all its many forms, continues to be an effective means for attackers to achieve their goals. One security vendor recently reported a 500% increase in social engineering attacks during the first half of 2018, stating, “Social engineering is increasingly the most popular way to launch email attacks. Criminals continue to find new ways to exploit the human factor.”

Through personal observation, as well as confidential conversations with assessed companies, it has become clear that the amount of data found to be inaccurately represented or misleading is great and far-reaching. What has come out of this probing into Security Ratings vendors’ methodologies and practices is a question that should rattle organizations to the core:

Why don’t you just tell us?

When presented with evidence that their methodology is flawed, Security Ratings vendors offered up platitudes and apologies for misrepresenting the assessed organization. They also asserted that the findings could only be corrected by the assessed organization disclosing all of its IP blocks and domains, as well as providing insight into other areas such as compensating controls. This was done under the guise of ensuring that their customers viewed the organization correctly.
Even if all of this confidential data had been provided, the Security Ratings vendors could only assure that the findings would be marked as ‘false positives,’ and that no changes to their process or methodology would result.

As a reminder, Security Ratings organizations are using passive, non-invasive means to attain an attacker’s-eye view of organizations. They tout the accuracy of their data, some even claiming a false positive rate of less than 1%, though they offer no data to support that metric. Yet multiple inaccuracies in their reporting have been presented, and their processes have not improved. If their data truly were highly accurate, why would Security Ratings vendors need your organization to provide confirmation and additional context about your infrastructure?

…Something is rotten in the state of Denmark.