Fair and Accurate Security Ratings: A Crucial A/B Conversation Most Don’t See
Early last year, I received word that there was a problem with my security that was harming others, but that if I paid money, it would be fixed for me. No, I didn’t click an untrusted URL and get presented with the latest round of Apple/Microsoft technical-support scams. I was presented with a report containing thousands of findings from a Security Ratings vendor.
Security Ratings is a fairly new industry focused on providing public- and private-sector customers a means to measure the digital security posture of other organizations. A number of companies in the industry are named as committed to the Principles for Fair and Accurate Security Ratings (and some are not); they offer their customers an attacker’s-eye view of a company’s security posture using various passive, non-intrusive, patented, proprietary processes. Many compare their model to that of credit ratings services. Unfortunately, to date, many, if not all, perform this function without the assessed organization’s knowledge or input.
The scenario typically goes like this:
A customer of a Security Ratings organization asks it to assess another company. Data is collected and possibly a report generated, but the score and recommendation are presented without the assessed company’s knowledge or input.
From the findings and scoring of the Security Ratings organization, right or wrong, the seed of reputational influence has been planted and starts to mature. From this data, accurate or not, verifiable or not, an organization chooses its next steps, with no requirement to involve the organization being assessed. So, imagine having your credit score provided, without your permission or knowledge, to a potential or current employer, who then uses it as the sole (or a major) factor in deciding whether to hire or keep you on staff.
I love a good mystery as much as the next person, but when this type of A/B conversation is one the assessed organization never sees, it has the potential to do greater harm, and it presents an interesting dilemma that only Schrödinger could articulate.
Over the last year, I have seen numerous instances of reports from various Security Ratings vendors delivered to third-party organizations with hundreds of pages of findings, containing thousands of inaccuracies in the data supporting the rating. In one case, it wasn’t until the findings were questioned and a manual review was performed that 10 percent of the report’s IP address findings alone were marked as invalid and removed, still leaving thousands of additional inaccuracies to be corrected. How is this type of practice an acceptable means to rate companies, especially without their knowledge?
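To make the stakes concrete, here is a toy calculation of how unvetted findings can skew a score. The scoring formula, point values, and counts below are invented purely for illustration; no actual vendor’s methodology is shown, only the general arithmetic of the problem.

```python
# Hypothetical illustration: how unvalidated findings skew a rating.
# The formula and numbers are invented; they do not reflect any real
# ratings vendor's scoring model.

def hypothetical_score(findings: int, base: float = 100.0, penalty: float = 0.01) -> float:
    """Toy model: each finding shaves a fraction of a point off a 100-point score."""
    return max(0.0, base - findings * penalty)

raw_findings = 5000             # findings as initially reported
invalid_rate = 0.10             # share later marked invalid on manual review
validated = int(raw_findings * (1 - invalid_rate))

print(hypothetical_score(raw_findings))   # 50.0 -- score built on unvetted data
print(hypothetical_score(validated))      # 55.0 -- after removing known-invalid findings
```

Even under this crude model, a 10 percent false-positive rate moves the score by a full letter-grade-sized margin, before accounting for the "thousands of additional inaccuracies" still in the report.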
Why getting it right matters
Unfortunately, the quality and accuracy of the data Security Ratings organizations have delivered to date has fallen far short of what they claim, and in many cases the data has been misleading. This negatively impacts not only the organizations being assessed (as well as their customers), but also the customers of the Security Ratings organizations themselves. It is a lesser version of how credit rating agencies like Moody’s contributed to the subprime mortgage crisis and subsequent market crash in the late 2000s.
The capabilities and data offered by Security Ratings organizations could be defined as a motive with a universal adapter:
- Executive Reporting
- Self Assessment
- Cyber Insurance
- Vendor Risk Management
- Mergers & Acquisitions
- Threat Reconnaissance
- National Cybersecurity
- Threat Intelligence
But the solutions to those needs outlined above come with serious weight behind them, which can lead to even more serious ramifications.
Assessed organizations aren’t alerted to the fact that they are being assessed or that any group or organization has visibility into critical information about them. This can include which systems are vulnerable to which types of attacks, areas where data is leaking, social engineering targets, and so on. To date, there is no requirement through the Principles or from the ratings organizations themselves to show proof of a relationship between an assessed organization and a Security Ratings customer. This means that while this information could be used to protect organizations from making poor vendor decisions, the scope and scale can also provide attackers, and even nation-state sponsored groups, access to information that could be equally “actionable”.
An important question might be: who has access to this trove of data, including vulnerabilities?
Here’s an example of how this data is currently being used. Last year, AgendaWeek published an article on the “10 Most, Least Cyber-Secure Companies.” The article describes proxy advisory firm Egan-Jones (E-J) “recommending that investors vote out chairpersons — and sometimes other directors — at companies with abysmal cyber security.” It went on to say:
This proxy advisory rates companies based on how well they comply with cyber-security framework requirements set by the federal *National Institute of Standards and Technology* (NIST) and the *International Organization for Standardization* (ISO).
E-J hires a technology security vendor to visit corporate websites of all 6,000 U.S. and foreign companies that it covers. The vendor tests whether companies keep their computer servers updated and are on top of other factors such as network security, patching cadence, endpoint security and information leaks.
Further in that article, Kevin McManus, vice president and director of proxy services at Egan-Jones, made the following statements:
“Essentially, we’re measuring whether the windows and doors are left open at the company plant from a cyber-security standpoint,”
“For companies, the worst case is a huge negative impact on the firm’s value, reputation or ability to do business.”
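For readers unfamiliar with what an outside-in check like the one described above might look like, here is a minimal sketch of a passive signal: the version banner a web server publicly advertises in its HTTP headers. The parsing heuristic and version threshold are my own invented example, not the actual process used by Egan-Jones’s vendor or any ratings organization, and they also illustrate how such signals can mislead: a server that hides its banner, or backports patches, tells this check nothing.

```python
# Hedged sketch: one passive, non-intrusive signal an outside observer
# could read -- the Server banner a site advertises. The heuristic below
# is invented for illustration; no vendor's actual method is shown.
import re

def banner_looks_outdated(server_header: str, minimum: tuple) -> bool:
    """Flag an Apache banner older than a hypothetical minimum version."""
    m = re.search(r"Apache/(\d+)\.(\d+)", server_header)
    if not m:
        # No version advertised: a passive check can conclude nothing,
        # regardless of how well-patched the server actually is.
        return False
    return (int(m.group(1)), int(m.group(2))) < minimum

print(banner_looks_outdated("Apache/2.2.15 (CentOS)", (2, 4)))  # True
print(banner_looks_outdated("Apache/2.4.57 (Ubuntu)", (2, 4)))  # False
print(banner_looks_outdated("nginx", (2, 4)))                   # False
```

Note the asymmetry: this kind of check can only see what a company chooses (or forgets) to expose, which is exactly why unverified findings built from such signals deserve scrutiny before driving board-level recommendations.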
While there are numerous examples where poor security practices at senior levels have contributed to breaches, harm, and loss to customers (and shareholders), what happens when Egan-Jones or others make a statement or decision without accurate data?
What if the worst case for companies wasn’t ‘a huge negative impact on the firm’s value, reputation or ability to do business’ because of accurate findings but because the security ratings organization supplied bad data?
Security ratings organizations need to practice what they preach and show their work, with data and analytical accuracy. That should include contextual evidence as well as the level of confidence they have in each finding. Additionally, the Principles surrounding security ratings need to be more than empty, unenforceable guidelines that exist only to inspire. It’s critical that they be updated to cover accountability, protections for customers, engaging organizations through responsible disclosure, and a focus on improving the hygiene of the ecosystem rather than ridiculing organizations for failing to meet a grade an individual ratings organization created in secret.
It’s also important that those engaged in providing security ratings be responsive when they are made aware of vulnerabilities and technical inaccuracies in their process and data, and that they resolve these issues not just in the specific instance but by changing their process so the problem does not recur. Such responsiveness should not take 30 days, 90 days, six months, or a year. It should take a matter of days.
There is no shortage of adversaries out there intent on making money, harming and defrauding users and businesses, and gaining power and influence. The reputation of the Security Ratings industry, in name or in practice, shouldn’t be viewed through a similar lens.
There’s a lot to take in with this topic, and hopefully this is just the start of the conversation and the change.