WannaCry a River

Security and Validation With Internet Connected Medical Devices

Eris Maurer
7 min read · Jun 2, 2017

It is not a surprise that Microsoft Windows contains security flaws. The only real surprise is how they will be exploited by those who want to cause harm or turn a profit.

In mid-2017, a month after Microsoft issued updates for modern operating systems, the WannaCry ransomware hit. To sum up: it encrypted files on the computer, and if you wanted them decrypted, you had to send Bitcoin to the hackers, who would provide the decryption key. Many of the affected organizations were utilities and hospitals; the NHS in England was especially hard hit.

Medical devices are certainly vulnerable to similar attacks.

Despite reports that Windows XP was the main issue, most affected computers were running Windows 7. The updates for Windows 7 were available; they simply were not installed in time.

Robert Morris’s Three Golden Rules of Computer Security are as follows:

  1. Do not own a computer.
  2. Do not power it on.
  3. Do not use it.

We are left to move on to ever more insecure systems as we arrive at Rule 4: Do not connect it to anything.

You can be sure of relatively complete security if you follow the above. However: People Like Cat Pictures. So, human nature dictates we will violate that rule too.

Hackers (white, black, and gray hat) and internal testers are all on the lookout for software vulnerabilities. This opens us up to zero-day exploits that the developers of the systems do not even know about, and it is only a matter of time before those exploits affect a user.

[Photo of a cat. Caption: This cat probably caused WannaCry. Probably.]

We are now at Rule 5: Apply patches promptly. That’s fine for one’s home PC, or for many companies.

But some companies have other policies about prompt patching. Those policies are not due to oversight, ignorance, or blind trust; they are very intentional choices, often driven by regulatory or even quality control pressures.

In FDA-regulated industries, the lack of updates is occasionally by design, because constant updates conflict with the longstanding practice of software validation.

Software validation? Never heard of it? I will cover that in the next section. If you have, feel free to skip ahead.

Validation (or Why Your MRI Runs Windows 95)

The Therac-25 incidents are widely cited as part of the impetus to validate and qualify software, but the idea of ensuring a piece of equipment operates the same way every time seems rather obvious. Now.

The Wikipedia article on the Therac-25 has a reasonable overview of the incidents themselves, but to sum up: the programmers reused code and made several other errors, including relying on software in place of the hardware safety interlocks that earlier models had. These errors occasionally delivered massive radiation overdoses to patients, which killed a few people.

So, that’s bad. Patient safety is a paramount concern.

How this is prevented nowadays, or at least severely mitigated, is that software affecting drug manufacture or medical devices is validated (tested) in a way that ensures it will function the same way every time, with processes in place to ensure that regular function.

This is done through risk analysis and testing. For the most rigorous kind of testing, each action is checked and re-checked to make sure everything works as intended. Then the test is repeated with different samples. And if something outside the expected result happens, it needs to be documented, fixed, or justified.
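To make that concrete, here is a minimal sketch of how a single executed test step might be recorded. The TestStep structure and the balance example are my own illustration, not a regulatory form or any particular company’s protocol.

```python
# A hypothetical illustration of an executed validation step: every
# observed result is recorded, and any mismatch with the pre-approved
# expected result forces a documented deviation.
from dataclasses import dataclass

@dataclass
class TestStep:
    action: str           # the action the tester performs
    expected: str         # the pre-approved expected result
    observed: str = ""    # filled in at execution time
    deviation: str = ""   # documented justification if results differ

def execute(protocol: list[TestStep]) -> None:
    """Walk each step, record what was observed, and require a
    documented deviation whenever it differs from what was expected."""
    for step in protocol:
        print(f"STEP:     {step.action}")
        print(f"EXPECTED: {step.expected}")
        step.observed = input("OBSERVED: ").strip()
        if step.observed != step.expected:
            step.deviation = input("DEVIATION (document, fix, or justify): ").strip()

protocol = [
    TestStep(action="Tare the balance with an empty pan",
             expected="Display reads 0.000 g"),
    TestStep(action="Place the 100 g certified reference weight",
             expected="Display reads 100.000 g"),
]
execute(protocol)
```

Rerun the same protocol with different samples and you have the beginnings of the checked-and-re-checked rigor described above.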

So we are clear: these tests are not on the operating system itself; they are on the software that runs on the operating system, and on how the two interact.

It takes a long time, but the increased safety is typically worth it. Additionally, at least in the US under FDA guidance (and in the EU under theirs), it seems to be a good way to stay in business in the pharma and medical device industry.

Once validation is done, it is most economical not to do it again unless absolutely necessary. Updating a random driver on a PC that works, that was validated to perform its task, and that has had no trouble is not worth the time, effort, and money.

If IT is focused merely on the increased reliance on digital records and on compliance with US 21 CFR Part 11 (or rough international equivalents such as the European Union’s Annex 11), we can expect digital attackers to shift their focus to things such as medical records in search of personal information to sell or otherwise exploit. Some will move to the IoT devices that people use to treat various conditions. In all cases, it is vital that general, mundane updating (e.g., applying a security patch) is not ignored and is, at the very least, automated.

We do not want to have to send Bitcoin to Russian hackers to keep our insulin pumps functioning.

“Windows will now restart your computer for updates.”

Updating a computer can be a pain. Not in the existential-angst kind of way, but in the pop-up-window-in-the-corner-of-the-screen kind of way. Updating a hundred computers is more painful still. Updating hundreds of specialized pieces of equipment, some of which need risk assessment and possibly testing, is worse yet, and that is assuming everything goes perfectly.

Most people have had the experience of a computer updating some piece of software that then suddenly stops working, or of an update conflict that leaves familiar software misbehaving. The conclusions here should be obvious.

If an operating system might get updated, and you want to ensure that the software that might hurt people stays validated and functional, you have three real options:

  1. Update and do not retest.
  2. Retest the software.
  3. Do not update the system.

The first option is too risky; it might cause issues with a batch of drug product or, possibly, hurt someone.

The second option is often prohibitively expensive. It may be required in certain circumstances, but it takes time, money, and a specific field of expertise.

The third option is the most likely. Do not touch it. “If it ain’t broke,” as the saying goes. And if the system is not connected to the Internet, remote attacks are not going to happen.

But what happens when we violate Rules 4 and 5 of the above Golden Rules of Computer Security? Especially on a system that controls life or death actions?

Does My Diabetic Pump Need an IP Address?

As computers evolved to control more and more medical devices, those devices ended up being upgraded to, or designed around, off-the-shelf operating systems. While the Therac-25 ran its own custom operating system, many modern devices use operating systems like Windows IoT, Android derivatives, or other embedded systems. In fact, Windows IoT’s recent rebranding (from Windows Embedded) hints at the connected future of these items. There will be IV drip monitors connected to hospital networks for monitoring at nursing stations. This is inevitable.

But, when security flaws are found in such systems, we have questions to answer:

  1. How will they be patched?
  2. How will those patches be tested?
  3. Will we test them at all?

All of these questions will need to be answered. If we will pay $300 in Bitcoin to hackers to avoid losing the pictures on our computers, what will we pay when they control our insulin pumps? We will end up patching these systems.

The “How will they be patched?” and “How will those patches be tested?” questions are important but outside the scope of what I aim to discuss here. Typically, when a piece of software is upgraded, the qualification is written to amend the previous rounds of testing, and the update itself is applied over the air, from inserted media, or by coding directly on the machine. These methods are all valid and used throughout the industry.
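Whatever the delivery method, one sanity check they can all share is verifying the package’s integrity before it touches a validated system. A minimal sketch follows; the filename and digest are hypothetical placeholders, not any vendor’s actual values.

```python
# A hypothetical illustration: hash the update package and compare it
# to a vendor-published digest before applying it to a validated system.
import hashlib

def verify_patch(path: str, expected_sha256: str) -> bool:
    """Return True only if the package matches the published digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Placeholder values; a real deployment would pull both from the vendor.
if not verify_patch("device_update.pkg", "0123abcd" * 8):
    raise SystemExit("Refusing to apply an unverified patch.")
```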

I am most interested in question 3: Will we even test them?

“Open Your Tests to Page 1”

The depth of testing is based on a risk analysis, and a risk analysis generally asks three questions:

  1. What can occur?
  2. How frequently does it occur?
  3. What are the consequences when it does?

If the incident is inconsequential (e.g., a variation on the shade of gray on a printed form), then there is little risk, even if it happens all the time, because it does not result in any loss of data and, ultimately, it has no impact on the end user.

However, if the incident is catastrophic (e.g., patient death or hospitalization), then the risk is weighted much more heavily, even if it is statistically rare.
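One common way to formalize that weighting is an FMEA-style risk score, where severity and occurrence are each rated on a fixed scale. This is a minimal sketch under my own assumptions; the 1–5 scales, thresholds, and tier names are illustrative, not drawn from any specific guidance.

```python
# An FMEA-style risk score: severity and occurrence each rated
# 1 (negligible/rare) to 5 (catastrophic/constant). Full FMEA adds a
# detectability rating; the thresholds here are illustrative assumptions.
def testing_rigor(severity: int, occurrence: int) -> str:
    """Map a severity x occurrence rating to a testing tier.
    Catastrophic severity overrides the product, mirroring the point
    above: rare-but-deadly outweighs frequent-but-trivial."""
    if severity >= 5:
        return "full revalidation"
    score = severity * occurrence
    if score >= 12:
        return "targeted regression testing"
    return "document and release"

print(testing_rigor(severity=1, occurrence=5))  # gray shade: document and release
print(testing_rigor(severity=5, occurrence=1))  # overdose: full revalidation
```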

Patching a laser hair-removal machine might not be the most obvious priority in the world, but without interlocks it may be possible to hack into it and injure a patient. Further, the company that owns the laser might be liable for not patching it if such an attack were to occur. Even further, they presumably want to keep using the machine rather than find it locked down by something like WannaCry.

But what is the risk here?

It will ultimately be up to the companies that produce these machines to improve security drastically rather than let the risk to patients increase (not to mention the risk to their bottom line, if they wish to be amoral about it). And by “produce these machines,” I mean to include companies such as Microsoft and Apple, and organizations such as those behind the various Linux distributions that serve as the backbone of many of these devices.

The only suggestion I have is to automate the testing and validation process, which itself would require validation. It’s a little hand-wavy, I admit, but the intersection of validation and security is very complicated, and it will require a series of very good choices and very methodical processes executed very correctly. Very. Indeed.
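To gesture at what that automation might look like: freeze the approved test vectors at validation time, then rerun them automatically after every patch and fail loudly on any drift. A minimal sketch; compute_dose() and its vectors are hypothetical stand-ins for whatever the validated application actually does.

```python
# A hand-wavy sketch of automated revalidation: rerun the approved test
# vectors after every OS patch and refuse to release the system on drift.
# compute_dose() and the vectors below are hypothetical stand-ins.
APPROVED_VECTORS = [
    # (inputs frozen at validation time, expected output)
    ((70.0, 2.0), 140.0),
    ((55.5, 1.5), 83.25),
]

def compute_dose(weight_kg: float, mg_per_kg: float) -> float:
    return weight_kg * mg_per_kg

def revalidate() -> bool:
    """True only if every approved vector still reproduces its result."""
    ok = True
    for args, expected in APPROVED_VECTORS:
        actual = compute_dose(*args)
        if actual != expected:
            print(f"DEVIATION: compute_dose{args} -> {actual}, expected {expected}")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if revalidate() else 1)
```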

As a professional in the pharmaceutical field, I wish I were more optimistic about this. But security is incredibly hard, or else we would not have incidents, and security in these devices would be a given. But security is hard. Compliance with regulation is hard. Anyone who says differently is selling something.

--

Eris Maurer

Tech writer in pharmaceuticals. No, I do not write the inserts for your drugs; I write the tests, procedures, and reports for the manufacture of your drugs.