Obstacles to Cloud in Healthcare

Why is exchanging health data so hard?

I have had a few friends ask me why we see some strange things in healthcare. For example:

  • Health data is exchanged by fax, CD/DVD, or on a sheet of film using patients as mules
  • It’s more efficient to make you rewrite all your medical history than transfer your records between institutions
  • Data exchange takes a very long time, often several weeks if possible at all

The difficulty exchanging health data doesn’t just have one cause. It’s a complex mix of the regulatory landscape, existing incumbent institutions, and even the economics of procedures and the implications of exposing your “customer data” to competitors.

I won’t go into the non-technical issues (that would take a very long time), so for the purposes of this post let’s pretend that everyone magically wants to cooperate and that we all agree we want to exchange health data. What would it technically take to do that?

This is actually really hard for three reasons:

  1. The machines that produce medical data (laboratory equipment, imaging devices such as MRI, X-ray, and CT, electronic health records, patient monitors) pump data into on-premise systems that were never designed to connect to the internet.
  2. Data in these systems can only be queried over unencrypted protocols like DICOM or HL7.
  3. The networking infrastructure in facilities is not internet-ready. Servers don’t have static/public IP addresses, let alone DNS entries, SSL certificates, or any of the very basic cloud infrastructure needed to connect.

A system that exchanges data between institutions needs to overcome all of the above obstacles. To illustrate how difficult that is, I’ll share a recent experience.

A 40-bed community hospital in a rural area wanted specialist care for their patients, but they didn’t have enough patients to warrant hiring specialist physicians to cover their hospital 24/7. They found a large physician group of specialists based in a major urban area to serve their neurology and radiology patients. My company, MedXT, was brought in to connect the rural hospital to the physician group.

Problem 1: Getting the image data off the on-premise servers

To get the consults started, we needed to get image data from CT Scanners, MRIs, Ultrasounds and X-Rays off the systems at the community hospital and to the cloud so the remote physicians could see the data.

There were several imaging servers in the basement of the hospital, but they didn’t have static IP addresses, nor were they accessible from the internet.

That wasn’t the only problem: those systems communicate using an unencrypted file and networking protocol called DICOM, so the data would be exposed if it were sent over the internet as-is.

Problem 2: Sending results back to the hospital Electronic Health Record

Once the remote physicians have made their diagnoses, they need to send their written reports back to yet another system, one that does not have a static IP address and speaks yet another unencrypted protocol called HL7.

Solution (this is gross)

To get the data off the hospital network, we installed a secure proxy locally. The proxy had a local IP address; images were sent over the LAN to that address and then forwarded securely to our cloud platform over HTTPS.
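The relay pattern this proxy used can be sketched in heavily simplified form: listen on a LAN port for traffic from the imaging servers, then re-post whatever arrives to the cloud over TLS. Everything here is illustrative, not MedXT’s actual implementation — the endpoint URL is made up, and a real proxy would speak the full DICOM association protocol (e.g. via a library like pynetdicom) rather than reading raw bytes.

```python
import socket
import urllib.request

# Hypothetical cloud upload endpoint, for illustration only.
CLOUD_ENDPOINT = "https://cloud.example.com/instances"

def forward_to_cloud(payload: bytes, endpoint: str = CLOUD_ENDPOINT) -> urllib.request.Request:
    """Build an HTTPS POST that re-sends a payload received on the LAN.

    Returned as a Request object so the caller decides when to send it;
    urllib.request.urlopen(req) would perform the actual TLS upload.
    """
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/dicom"},
        method="POST",
    )

def run_proxy(listen_port: int = 11112) -> None:
    """Accept LAN connections (e.g. from a CT scanner), read what the
    modality sends, and forward it to the cloud over HTTPS."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        chunks = []
        while chunk := conn.recv(65536):
            chunks.append(chunk)
        conn.close()
        req = forward_to_cloud(b"".join(chunks))
        urllib.request.urlopen(req)  # TLS handshake + upload happen here
```

The key property is that nothing unencrypted ever leaves the building: DICOM stays on the LAN, and only the HTTPS leg crosses the internet.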

Once results were produced, we needed to deliver them into another on-premise system, one that was inaccessible to the aforementioned proxy. To do this we had to set up a VPN tunnel to a server on the hospital network and send the results to that system over HL7.
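Part of why the VPN was unavoidable is that HL7 v2 results are conventionally delivered over a bare TCP framing called MLLP, which has no encryption of its own. A minimal sketch of that framing (the report content is a made-up example, not a real result message):

```python
# MLLP (Minimal Lower Layer Protocol) wraps each HL7 v2 message in a
# vertical-tab start byte and a file-separator + carriage-return trailer.
MLLP_START = b"\x0b"
MLLP_END = b"\x1c\x0d"

def mllp_frame(hl7_message: str) -> bytes:
    """Frame an HL7 v2 message for transmission over a TCP socket."""
    return MLLP_START + hl7_message.encode("ascii") + MLLP_END

def mllp_unframe(data: bytes) -> str:
    """Strip MLLP framing from a received message."""
    return data.removeprefix(MLLP_START).removesuffix(MLLP_END).decode("ascii")

# A skeletal ORU (observation result) message; segment contents are illustrative.
report = (
    "MSH|^~\\&|MEDXT|CLOUD|EHR|HOSPITAL|20140101||ORU^R01|1|P|2.3\r"
    "OBX|1|TX|||Normal study."
)
framed = mllp_frame(report)  # bytes ready to write to the VPN-tunneled socket
```

Anything this thin on the wire has to lean on the network layer (here, the VPN) for confidentiality.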

Setting up what appeared to be a simple point-to-point connection actually required installing a proxy and configuring a VPN, which took significant time and expertise. Can you imagine if this hospital wanted to connect to 20 or 30 other facilities? It would be a brittle, unstable mess.

Though this setup was less than optimal, it prompted me to think about the future, and I’m optimistic that it will look very different. Connecting thousands of facilities pairwise, each with its own VPN or a patchwork of proxies shuffling data between institutions, simply won’t scale.

The future will have some kind of medical data platform built on what we consider “consumer grade” technologies: HTTP, REST, SSL, and even XMPP-based interfaces. Onboarding will be simple and immediate, requiring little technical expertise. Data will move between organizations as easily as email.
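If such a platform existed, sharing a study with another organization could look as simple as one authenticated HTTP call. Everything below is hypothetical — the endpoint, token, and JSON shape are invented to show the contrast with proxies and VPNs; a real platform would likely follow a standard such as FHIR.

```python
import json
import urllib.request

# Hypothetical exchange platform, for illustration only.
API_BASE = "https://api.healthexchange.example.com"

def build_share_request(study_id: str, recipient: str, token: str) -> urllib.request.Request:
    """Build an HTTPS request sharing a study with another institution.

    No proxy, no VPN: just TLS plus a bearer token, the same pattern
    consumer web APIs use. The endpoint and payload are invented.
    """
    payload = json.dumps({"study": study_id, "recipient": recipient}).encode()
    return urllib.request.Request(
        f"{API_BASE}/shares",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(build_share_request(...)) would send it.
```

The point isn’t this particular API; it’s that onboarding collapses to “get a credential, make a call,” instead of weeks of network engineering.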

This future medical data platform will not only solve the irritating problems we see today, like the difficulty transferring data between institutions. It will allow many “light up” scenarios like:

  • Hyper specialization: Routing complex cases to the top specialist in that field for consults instead of relying on who is locally available.
  • Reduced Administrative costs: Automation programs can operate on the data constantly, coding, sorting, fraud detection, indexing, etc.
  • Increased throughput: In the future nurses, assistants and technologists will scan, sample and examine patients and experts will be behind the digital curtain planning care. This approach can “lever up” the experts allowing them to more efficiently treat patients.
  • Internet of medical devices: Robotic surgery devices, monitoring lab equipment, oncology equipment, orthopedic planning tools etc. today have very long R&D cycles because they are not online and can’t improve in real time. That will change and we’ll see smarter, more sensitive medical devices.