Enhanced Risk Assessment and Vulnerability Scoring for Medical Devices — Rubric for CVSS

Sunil Kumar
Deep Armor
Nov 5, 2020

Introduction

The Common Vulnerability Scoring System (CVSS) is one of the most widely used frameworks for scoring security vulnerabilities. The scoring system has evolved over the years, with the latest version (at the time of writing) being CVSS 3.1.

The CVSS defines a set of metrics that characterize a vulnerability, including the attacker profile, the pre-conditions for exploitation, and the impact of the vulnerability expressed using the CIA (Confidentiality, Integrity, Availability) triad. The output of the CVSS calculator is a structured vector that captures the characteristics of the vulnerability and a numerical score that indicates its severity.
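
For readers who have not worked with the calculator directly, here is a minimal Python sketch of the CVSS v3.1 base-score computation. The metric weights and the rounding rule come from the FIRST specification; the example vector at the bottom is purely illustrative.

```python
import math

# CVSS v3.1 base-metric weights (from the FIRST specification).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weights depend on Scope
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """CVSS v3.1 'Roundup': smallest one-decimal number >= value."""
    int_value = round(value * 100000)
    if int_value % 10000 == 0:
        return int_value / 100000.0
    return (math.floor(int_value / 10000) + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score from a vector string such as
    'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    scope_changed = m["S"] == "C"
    iss = 1 - (1 - WEIGHTS["CIA"][m["C"]]) * (1 - WEIGHTS["CIA"][m["I"]]) * (1 - WEIGHTS["CIA"][m["A"]])
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15 if scope_changed else 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"]["C" if scope_changed else "U"][m["PR"]]
                      * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if scope_changed else impact + exploitability
    return roundup(min(raw, 10))

# Illustrative example: network-exploitable, low-complexity, high CIA impact.
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # -> 9.8
```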

The US Food and Drug Administration (FDA) has a “Medical Device Development Tool” (MDDT) program that allows the FDA to qualify tools that aid in the development and evaluation of medical devices. For a tool to “qualify”, the FDA must evaluate it and concur, based on the available supporting evidence, that the tool produces scientifically plausible measurements and works as intended within its specified context of use. The idea behind MDDTs is to provide systematic and efficient methods that help in developing medical devices while reducing the engineering burden during pre-market evaluation and audits.


“Rubric for Applying CVSS to Medical Devices” is a newly qualified (as of October 20, 2020) MDDT that provides a series of structured questions to be used along with the Common Vulnerability Scoring System (CVSS) v3.0 to reliably calculate the severity of security vulnerabilities and to aid in vulnerability disclosure.

Why a tailored rubric for CVSS?

The CVSS is a generic scoring system that can be applied to a broad spectrum of software and hardware products. There are eight metrics under the Base Score section, defined to accommodate all kinds of security threats and vulnerabilities. These definitions are intentionally product- and industry-agnostic, making CVSS a versatile tool that can be used for scoring a vulnerability in, say, a mobile application just as well as in a pacemaker.

This versatility brings with it certain limitations. CVSS scoring can often be very subjective, and the definitions may be interpreted and answered differently by different users. Seven of the eight metrics offer only two or three options, which sometimes limits the choices available for accurately describing a vulnerability. Using the CVSS also requires a good understanding of the affected product, its architecture, its users, and the information it gathers, processes, and stores.

Use of the “default” CVSS may therefore lead to ambiguities and inconsistencies when applied to vulnerabilities affecting healthcare and medical devices. The CVSS framework does not take into account clinical environments, patient safety, or other healthcare-specific factors, and medical device manufacturers often face challenges in arriving at an accurate severity score for vulnerabilities in their products. To address these challenges, produce accurate severity ratings, and make the CVSS framework more applicable to medical devices, the FDA has qualified a rubric developed by MITRE under the MDDT program. The rubric provides a series of structured questions and refined metric definitions for each vector element in the CVSS, taking patient safety and other clinical considerations into account.

How does it work?

The medical device rubric for CVSS offers a series of structured questions for each portion of the CVSS vector. For each vector element, the rubric defines a new set of questions designed to produce more accurate vector values, which in turn constitute the CVSS score. When analysts are unsure of the answers to certain questions, the rubric assumes the worst-case circumstance and assigns the worst-case vector value. The structured question flow also skips certain questions based on the answers the user has already provided.

Answering the structured questions generates CVSS vector values specific to the affected medical device. When the answer to a question suggests that the vulnerability might have an adverse effect on patient safety, there is an explicit notice that the analyst might need to perform a safety-oriented hazard analysis to determine whether the issue must be reported to the FDA/CDRH as covered in the Post-Market Guidance. Such items are marked as PIPS, an informal acronym that stands for “Potential Impact to Patient Safety.”
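
As a rough illustration of how worst-case defaults, question skipping, and PIPS flagging fit together, here is a hypothetical Python sketch. The question wording, skip logic, and flagged condition are our own inventions for illustration and are not taken from the MITRE rubric.

```python
# Hypothetical sketch of the rubric's question style: the questions, skip
# logic, and vector values below are invented and are NOT the rubric's
# actual wording.
from typing import Optional

def ask(question: str) -> Optional[bool]:
    """Return True/False for yes/no, or None when the analyst is unsure."""
    answer = input(f"{question} [y/n/unknown] ").strip().lower()
    return {"y": True, "n": False}.get(answer)  # None means 'unknown'

def attack_vector() -> str:
    remote = ask("Can the vulnerability be exploited from a remote network?")
    if remote is None:
        return "N"          # unsure -> assume the worst case (Network)
    if remote:
        return "N"          # remaining, more specific questions are skipped
    nearby = ask("Does exploitation require being within radio range (e.g. Bluetooth)?")
    if nearby is None or nearby:
        return "A"          # worst remaining case (Adjacent)
    return "L"

def patient_safety_flag() -> bool:
    harm = ask("Could exploitation degrade therapy delivery or patient monitoring?")
    # A 'yes' (or an unsure answer) marks the item PIPS, prompting a
    # safety-oriented hazard analysis per the post-market guidance.
    return harm is None or harm
```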

The rubric pays particular attention to the confidentiality, integrity, and availability metrics by considering the different types of healthcare data and processes/functionality that could be impacted by the vulnerability. Six such sections are defined for evaluating the CIA impact. For each type of data or functionality, the analyst considers whether exploitation of the vulnerability enables the attacker to read, modify/delete, or prevent access to that particular data or functionality.
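
A small sketch of that evaluation pattern is below; the category names are hypothetical examples (the rubric's actual six categories are in the MITRE document), and we assume, as the rubric does elsewhere, that the worst case across categories drives the final value.

```python
# Hypothetical sketch: for each data/functionality category the analyst
# records whether the attacker can read (C), modify/delete (I), or deny
# access to (A) it, and the overall C/I/A values take the worst case.
IMPACT_ORDER = {"N": 0, "L": 1, "H": 2}

def worst(values):
    return max(values, key=lambda v: IMPACT_ORDER[v])

assessment = {
    # example category:     read -> C, modify/delete -> I, deny access -> A
    "therapy settings":     {"C": "N", "I": "H", "A": "H"},
    "sensor measurements":  {"C": "L", "I": "H", "A": "H"},
    "patient identifiers":  {"C": "H", "I": "N", "A": "N"},
}

cia = {metric: worst(cat[metric] for cat in assessment.values())
       for metric in ("C", "I", "A")}
print(cia)  # {'C': 'H', 'I': 'H', 'A': 'H'}
```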

In addition to the series of structured questions, each portion of the CVSS vector has a Decision Flow diagram and an Extended Vector table in the rubric. The Decision Flow diagram is the graphical representation of decision logic for the series of structured questions. The Extended Vector table specifies the CVSS extended vector that results from answering the series of structured questions.

The FDA Cybersecurity MDDT guidance recommends using the rubric by taking inputs from subject matter experts in different fields, including:

  • Cybersecurity and privacy
  • Device engineering, design, and architecture
  • Patient health impact from resulting hazards
  • HDO device usage scenarios and clinical workflow impact
  • Information technology integration and interoperability

The outcome of using the rubric includes:

  1. CVSS score
  2. CVSS vector
  3. Extended vectors, which use a syntax similar to the CVSS vector but with each measure’s code beginning with an ‘X’
[ Example of an extended vector (XACL) to determine Attack Complexity. Source: MITRE rubric ]
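
Because the extended elements follow the familiar slash-separated syntax, they are easy to separate from the standard CVSS metrics when post-processing rubric output. The sketch below assumes only the X-prefix convention described above; XACL is the only element name taken from the rubric, and the function itself is our own illustration.

```python
def split_rubric_vector(vector: str):
    """Split a slash-separated CVSS-style vector into standard CVSS metrics
    and rubric extended elements (codes beginning with 'X', e.g. XACL for
    Attack Complexity)."""
    standard, extended = {}, {}
    for element in vector.strip().split("/"):
        if ":" not in element or element.upper().startswith("CVSS:"):
            continue  # skip the version prefix and any malformed parts
        code, value = element.split(":", 1)
        (extended if code.startswith("X") else standard)[code] = value
    return standard, extended
```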

Advantages

Using the supplemental rubric with CVSS 3.0 helps make the generic CVSS scoring system more relevant when calculating the severity of vulnerabilities in medical devices. The rubric introduces a series of extended vector elements (all of them starting with an ‘X’) and crisply defines each of them in the reference document. Numerous examples and questions assist in arriving at an accurate result for the CVSS metrics: for instance, an attacker exploiting a vulnerability over Bluetooth LE is a ‘Local’ attacker, while the actor would be ‘Adjacent’ if they were using Bluetooth Classic. These detailed explanations and examples eliminate much of the ambiguity and subjectivity and, in our opinion, make this a very useful tool for scoring medical device vulnerabilities.

Other advantages include:

  1. Users can reach consensus on a vulnerability’s severity more easily and quickly with the rubric than without it.
  2. It provides greater clarity and a more systematic approach to making the CVSS assignment.
  3. Rubric assignments and scores more accurately reflect the severity of vulnerabilities in medical devices.
  4. It refines vulnerability grading discussions, forces teams to think systematically, and makes the scoring process more repeatable and consistent.
  5. The tool has undergone multiple revisions to accommodate feedback from sponsor studies.
  6. It accommodates healthcare and patient safety impact.
  7. The rubric provides more than a number and a vector: it documents the scoring process and accounts for the clinical end-user environment when assessing exploitability and technical impact.
  8. It does not reinvent the wheel: it uses the industry-accepted CVSS 3.0 standard and enhances it with extended vectors to make it more accurate for medical device vulnerabilities.

Questions and Gaps

  1. Sponsor studies have been done on vulnerabilities in infusion pumps, insulin pumps, radiological imaging devices, implantable cardiovascular devices, patient programmers for neuro-stimulators, and dialysis devices. The idea is to cover personally worn devices, hospital infrastructure, implants, marketed products, and specialized programmers used in physician offices. This does not encompass all forms of medical devices, and the rubric may not work well for certain types of healthcare and medical device platforms; more market study may be required.
  2. There will remain some variability in the way a vulnerability’s impact is perceived since each stakeholder has a different ‘loss’ calculus, risk appetite and risk management process.
  3. User feedback and data are still relatively thin for the rubric.
  4. This technique is not suitable for estimating the impact and urgency of a ‘chained’ vulnerability attack.
  5. Cannot be used for rating security risk and likelihood of exploit. Deep Armor recommends the use of the OWASP Risk Rating Calculator for this purpose.
  6. The rubric is more complex and detailed than CVSS alone, and therefore requires additional time and effort to use.
  7. This rubric is also appearing in FDA pre-market submissions, seemingly as a way for manufacturers to ‘justify’ in a submission the engineering approach taken to address a given theoretical cyber-weakness in their design. However, the guidance recommends that the FDA not treat such use in pre-market activities as a valid regulatory use of the rubric. Per the official guide:

“We should discourage all pre-market regulatory reliance on the tool for the time being”

Case Studies: Applying the medical device rubric for CVSS

After studying the official rubric for CVSS (MITRE) and analyzing the structured questions, Deep Armor applied the tool to two real-world vulnerabilities. These issues were discovered in products on the market classified as Class 1/Class 2 medical devices. We do not identify the affected products or disclose exploit details in this blog.

Deep Armor had already scored the two vulnerabilities using the CVSS 3.1 standard before learning about the rubric. We wanted to use the rubric to see whether it changed our responses and our justifications for the CVSS scores and metric values we had already reported.

After using the rubric, we noticed a minor change in the CVSS vector and no change in the CVSS rating for one of the vulnerabilities. For the other vulnerability, the rubric bumped up the rating, via a slightly larger change in the score and vector.

Before comparing the two scoring systems, we present a brief description of the affected product and its components.

Product

The product is a smart wearable with built-in sensors that track the wearer’s vitals as well as activities such as walking, running, and idle time. The product ecosystem consists of cloud, mobile, and hardware components. The device uses the Bluetooth Low Energy (BLE) protocol to connect to a companion application running on the user’s mobile phone. When the device is unpaired, it advertises BLE packets so that the manufacturer’s app can discover it and establish a connection. Once the connection between the device and the companion app is established, the app is responsible for sending notifications, receiving sensor values, performing firmware updates, and a host of other actions.

Issue 1: BLE injection from a malware application on the user’s phone

After a successful connection between the phone and the smart device, a malicious application installed on the same phone, with Bluetooth permissions, can impersonate the legitimate app and inject arbitrary commands (device reboot, firmware update, fraudulent notifications, and so on) into the device over the BLE link.

Applying the CVSS 3.1 system to this vulnerability produces a score of 7.9, whereas using the rubric results in a score of 7.8. Below is the outcome of the “Rubric Answer Form (Scorecard)” and a comparison table between CVSS 3.1 and the rubric method:

Issue 2: Denial of Service (DoS) via rogue applications on attacker-controlled mobile devices

An unpaired smart medical device can be discovered by any Bluetooth device in its vicinity. This opens a window for an attacker to use their own phone to perform a “BLE connect” to the device using a custom-developed rogue app. While it is connected to the rogue app, the smart medical device cannot pair with the legitimate application. This prevents a legitimate user from connecting to the device and causes a denial of service (DoS) condition.

Using the CVSS 3.1 standard resulted in a score of 3.1 and a ‘Low’ severity rating. However, applying the rubric bumped the score up to 4.0 and a ‘Medium’ severity rating. Questions in the ‘Attack Complexity’ section of the rubric made us rethink some of our earlier justifications and led us to change the complexity from ‘High’ to ‘Low’. We also changed the ‘Attack Vector’ from ‘Adjacent’ to ‘Local’; the rubric explicitly calls out a Bluetooth LE based attacker as ‘Local’ (due to the distance limitations) and a Bluetooth Classic based attacker as ‘Adjacent’. While this particular change should have lowered the CVSS score further, the change in the ‘Attack Complexity’ metric had a bigger impact and pulled the score up by 0.9 points.
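
To see why the Attack Complexity change dominated, consider the CVSS v3.1 exploitability sub-score (8.22 × AV × AC × PR × UI). Holding PR and UI fixed, the comparison below isolates the two metrics we changed, using the weights from the FIRST specification.

```python
# CVSS v3.1 weights (FIRST spec): AC High = 0.44, AC Low = 0.77,
# AV Adjacent = 0.62, AV Local = 0.55. With PR and UI held fixed,
# the exploitability sub-score scales with AV * AC.
before = 0.62 * 0.44   # AV:Adjacent, AC:High -> 0.2728
after = 0.55 * 0.77    # AV:Local,    AC:Low  -> 0.4235
print(after / before)  # ~1.55: the AC change more than offsets the AV change
```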

Screenshots of the “Rubric Answer Form (Scorecard)” and the comparison table between CVSS 3.1 and the medical device rubric are as follows:

Conclusion

The CVSS is the gold standard for rating the severity of security vulnerabilities in the industry today. CVSS scores and ratings influence business decisions on when, and whether, to fix a security vulnerability. The score is also a critical data point in security advisories and public disclosures. Calculating an accurate CVSS score for a security vulnerability is therefore an important responsibility.

Security risk and vulnerability ratings are inherently subjective. Affected parties may mitigate some published risks through workarounds and additional security controls, and may argue that the CVSS ratings are not accurate. To a certain extent, the CVSS temporal and environmental scores allow businesses to custom-rate vulnerabilities for their own use cases and deployments, but those sections, too, are defined to be applicable to a broad range of products.

The FDA-qualified cybersecurity MDDT (the rubric for CVSS 3.0) shows promise as a tool for a specific purpose. By establishing a common language for the healthcare sector, it can help communicate the impact of vulnerabilities clearly to the FDA, business partners, and customers in the event of public security incidents. The tool is less suited to threat modeling and risk analysis, and its use for pre-market activities is, in fact, discouraged.

Using an existing CVSS standard (the 3.0 revision) and customizing it for medical devices reduces the learning curve for experienced analysts, while increasing the value of CVSS. It is our opinion that the current early version of the rubric (v 0.12.04) is a great first step. Using the rubric for all classes of medical devices over a period of time should help in fine-tuning it before it’s accepted as a standard.

The rubric guidance v 0.12.04 (current) is available at:

https://www.mitre.org/sites/default/files/publications/pr-18-2208-rubric-for-applying-cvss-to-medical-devices.pdf

See also: https://www.mitre.org/publications/technical-papers/rubric-for-applying-cvss-to-medical-devices.
