Healthcare Security at Enigma 2016
Enigma is a security conference launched this year by the USENIX Association. Here are a few notes I jotted down during talks by Avi Rubin and Kevin Fu on healthcare security.
Hacking Health (Avi Rubin)
Avi Rubin was first exposed to security evaluation through an investigation commissioned by the Federal Trade Commission. He was charged with evaluating security at a rental car company and found many poor practices, including a lack of authentication and encryption, which notably made credit card numbers easy to access.
Rubin eventually realized that the healthcare sector exhibits one of the worst security landscapes. Starting in 2009, he performed an IT tour of several hospitals. His description of the cybersecurity landscape of the healthcare industry is alarming: from drug dispensers vulnerable to denial of service attacks to medical devices with open ports providing root access. He emphasized the sector’s unique set of stakeholders:
- Doctors: implementing security measures should not impact the clinical process. Doctors are in charge and will not accept solutions affecting their way of working with patients.
- Patients: like the rest of us, patients do not always follow security instructions.
- Regulators: healthcare is a heavily regulated sector, but regulators, despite their best intentions, do not always understand the implications of security.
- Insurance companies: they seek to minimize the amount of money spent.
- Device manufacturers: medical devices need to go through an extensive FDA approval process before they can be put on the market, which can slow down the availability of security patches.
Furthermore, healthcare systems (e.g., radiation dosage, medication dosage in infusion pumps, stocking of supplies in intensive care units, shifts of doctors and nurses, health records, drug dispensing robots) rely heavily on software, which broadly exposes them to attackers.
As healthcare applications are increasingly connected to patients (through mobile devices) and cloud storage providers, new problems emerge, such as distributing keys so that patients can decrypt and read their medical information, or side-channel vulnerabilities exposing medical data stored in the cloud on behalf of hospitals.
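To make the key distribution challenge more concrete, here is a minimal envelope-encryption sketch in Python using the `cryptography` package (my own illustration, not something presented in the talk): the hospital encrypts each record with a fresh symmetric data key and wraps that key with the patient’s public key, so the cloud provider only ever stores ciphertext while the patient can still decrypt.

```python
# Minimal sketch of envelope encryption for sharing a medical record with a
# patient through untrusted cloud storage (illustrative only, not from the talk).
# Requires: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The patient holds an asymmetric key pair; only the public half is shared with
# the hospital, so the cloud provider never handles a decryption key.
patient_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
patient_public_key = patient_private_key.public_key()

# Hospital side: encrypt the record with a fresh symmetric data key,
# then wrap that data key with the patient's public key.
record = b"2016-01-26: blood pressure 120/80, no medication change"
data_key = Fernet.generate_key()
encrypted_record = Fernet(data_key).encrypt(record)   # stored in the cloud
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = patient_public_key.encrypt(data_key, oaep)  # stored alongside the record

# Patient side: unwrap the data key with the private key, then read the record.
recovered_key = patient_private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(encrypted_record) == record
```

In practice the hard part is not the cryptography itself but distributing and recovering the patient-side keys at scale, which is exactly the open problem Rubin points to.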
In his talk, Rubin outlined the following working challenges to improve healthcare security in the short and long term:
- Application whitelisting on medical devices
- Security hygiene for backend systems
- Database activity monitoring: anomalous queries should be flagged and reported (e.g., a query returning thousands of patient records); see the toy sketch after this list.
- Multi-factor authentication and virtualization for remote access to medical systems: doctors should not access medical information from personal machines that their kids also use to play games.
- Universal encryption of data
- Terms of agreement with cloud service providers: create a standard for legal agreements between health providers and cloud providers hosting confidential patient medical information.
- Automated support for security in chart accesses: logging should be more systematic to allow for more precise security analysis.
- Privacy for self-identifying data (e.g., a patient’s genome sequence)
- Authentication of clinical personnel: in some hospitals, all employees currently have access to the same level of information about patients. Poor security hygiene is also common, such as nurses entering passwords on behalf of doctors to bypass password timeouts. This goes back to the requirement that security not impact the clinical process.
- Usability of security: as the population ages, this is of paramount importance (e.g., it is hard for older patients to remember passwords).
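To illustrate the database activity monitoring item, here is a toy Python sketch (my own illustration, with assumed thresholds, not anything presented in the talk): it keeps a per-user baseline of query result sizes and flags queries that return far more patient records than usual or exceed a hard limit.

```python
# Toy sketch of database activity monitoring: flag queries whose result size
# is far above a user's baseline or above a hard limit (illustrative only).
from collections import defaultdict
from statistics import mean, pstdev

class QueryMonitor:
    """Keeps a per-user history of result sizes and flags large outliers."""

    def __init__(self, min_history=10, threshold_sigma=3.0, hard_limit=1000):
        self.history = defaultdict(list)   # user -> past result sizes
        self.min_history = min_history     # samples needed before using the baseline
        self.threshold_sigma = threshold_sigma
        self.hard_limit = hard_limit       # e.g., thousands of patient records

    def check(self, user, rows_returned):
        past = self.history[user]
        alert = rows_returned >= self.hard_limit
        if len(past) >= self.min_history:
            mu, sigma = mean(past), pstdev(past)
            if sigma > 0 and rows_returned > mu + self.threshold_sigma * sigma:
                alert = True
        past.append(rows_returned)
        return alert

monitor = QueryMonitor()
for size in [3, 5, 2, 4, 6, 3, 5, 4, 2, 3]:   # a clinician's normal workload
    monitor.check("dr_smith", size)
print(monitor.check("dr_smith", 25000))        # True: flag for review
```

A real deployment would sit on the database audit log rather than in application code, but the idea is the same: an unusual bulk read of patient records should be reported, not silently served.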
This talk highlighted the importance of healthcare security as it affects pretty much everyone. Even though threat actors are not always properly identified, they can include terrorists and state-sponsored actors.
Medical Device Security (Kevin Fu)
This talk by Kevin Fu put an emphasis on medical devices within the broader question of healthcare security.
For several years, wireless communication has helped ensure that medical devices implanted in human bodies do not expose their users to severe infection risks. On the other hand, it has exposed these devices to attackers. For instance, some defibrillators came with a wireless monitoring system that alerted patients at home when they needed to be taken to the hospital. However, work by Halperin et al. [1] showed that the data transmitted by one of these monitoring devices was not encrypted, allowing attackers to read medical data and raising privacy concerns. Even more dramatically, this work showed that the defibrillator could be wirelessly forced to induce a fatal heart rhythm.
Hospital devices are also vulnerable because they frequently run outdated operating systems alongside medical instruments. For instance, many machines still run Windows XP, in its original version, i.e., without service packs or the corresponding security patches, because it is too difficult or expensive to obtain patches from manufacturers. Indeed, upgrades can be costly (several million dollars) because upgrading the operating system can require replacing the medical instrumentation altogether (e.g., the magnet in an MRI). Ironically, there have been instances where upgrades distributed by manufacturers contained malware because the website distributing the upgrade had been compromised. This illustrates why it is so hard to upgrade medical systems and why security should instead be built into systems’ initial design. However, even though manufacturers are aware of this issue, it will take a few years before such products reach the market and their security benefits are felt.
Fu concluded his talk by emphasizing that the major foreseeable risk is attackers mounting denial of service attacks that lead to wide-scale unavailability of patient care. The integrity of medical sensors is also key, as their software is not designed to run alongside malware on the operating system. As discussed in Avi Rubin’s talk, one of the main challenges in securing healthcare is that security solutions cannot interrupt clinical workflows: they must leave the way doctors interact with patients unaffected. Sometimes, it is better to accept the risk of a cyberattack than to deploy a security solution that will affect patients’ health.
References:
[1] Halperin, Daniel, et al. “Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses.” IEEE Symposium on Security and Privacy, 2008.
Please leave any comments you may have below or reach out to me on Twitter at https://twitter.com/NicolasPapernot