Why the cloud won’t solve your interoperability challenges

Alastair Allen
6 min read · Dec 15, 2022


The cloud is a game-changer. It’s not just a buzzword or a trend — it’s a revolutionary technology that is changing the way we work, live, and play. Within healthcare, the cloud has the potential to be equally transformative, by improving access to medical records, facilitating remote patient monitoring, and enabling data-driven decision making.

“Healthcare in the cloud” — created by DALL-E

It is also huge. The big vendors (Amazon, Microsoft, and Google) each provide massively scalable platforms with hundreds of major cloud products and services. They continue to innovate and invest heavily, with countless new announcements presented every year at their respective keynotes.

Their offerings range from the more traditional Infrastructure as a Service model (where you essentially run your software on someone else's computers) all the way through to the next generation of serverless computing (where you no longer really think about computers at all).

On top of this, most of the big cloud vendors now provide industry-focussed solutions. Microsoft, for example, is leading the way here with Microsoft Cloud for Healthcare, which provides trusted, integrated capabilities for a range of healthcare use cases.

This choice is great, but such an array of services, combined with constant exposure to the latest innovation, can create an implicit association between the cloud and success. This can lead decision makers to be overly optimistic about the potential of the cloud to drive successful outcomes.

Typically, this happens when projects are treated as technology-first initiatives. In contrast, where organisations start with the problem and work back to the technology, the link to success is far more explicit.

I was reminded of this earlier in the week, when I listened to a conversation in which an implicit association was being drawn between moving to the cloud and improved interoperability.

Unfortunately, it is not as simple as that. If it were that simple, I suspect most of healthcare would already be running in the cloud. Of course, you can move to the cloud and leverage cloud services to improve your interoperability posture, but unfortunately:

Simply moving to the cloud will not solve your interoperability challenges

To expand on this, let me define what I mean by interoperability.

Typically, people associate this term with moving data between systems using APIs or messages. Each application that sends data to or receives data from another usually needs to translate or map it into a different format so it can be correctly read and understood. In the short term this sometimes works, but there are many examples of healthcare organisations that have failed when trying to stitch together an entire landscape of applications this way.
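
To illustrate why this approach is so brittle, here is a minimal sketch in Python, with entirely invented payloads and field names, of the kind of point-to-point translation code it produces:

```python
# A minimal sketch of a point-to-point mapping between two hypothetical
# systems. Field names and formats are invented for illustration.

def map_system_a_to_system_b(reading: dict) -> dict:
    """Translate a blood-pressure reading from System A's format to System B's."""
    return {
        "bpSys": reading["systolic_mmhg"],   # same fact, different field name
        "bpDia": reading["diastolic_mmhg"],
        "takenAt": reading["observed_at"],   # and a different naming convention
    }

# Every pair of systems that needs to talk requires another function like
# this (n systems can need up to n * (n - 1) one-way mappings), and each
# mapping silently breaks whenever a vendor changes its format.
```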

My view on interoperability is that we need to stop with all this plumbing. We need to separate the data layer from the application layer and, in doing so, establish a common data language that is used by all the applications. It doesn't matter whether the data is stored in one place or federated across many locations (sometimes referred to as centralised or decentralised architectures). The important bit is that all the applications communicate using the same set of open data models.

In this environment, interoperability becomes a characteristic of the overall healthcare system, as opposed to a collection of brittle mappings between applications.
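
To make the idea of a common data language concrete, here is a hedged sketch of a single blood-pressure reading expressed as a FHIR R4 Observation, one of the open standards discussed below, written as a Python dict. The patient reference and values are invented:

```python
# A blood-pressure reading as a FHIR R4 Observation (LOINC-coded).
# Any application that speaks FHIR can read this without a bespoke
# vendor-to-vendor mapping. Patient reference and values are invented.

blood_pressure = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "85354-9",
            "display": "Blood pressure panel with all children optional",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient id
    "effectiveDateTime": "2022-12-15T09:30:00Z",
    "component": [
        {
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": "8480-6",
                                 "display": "Systolic blood pressure"}]},
            "valueQuantity": {"value": 120, "unit": "mmHg",
                              "system": "http://unitsofmeasure.org",
                              "code": "mm[Hg]"},
        },
        {
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": "8462-4",
                                 "display": "Diastolic blood pressure"}]},
            "valueQuantity": {"value": 80, "unit": "mmHg",
                              "system": "http://unitsofmeasure.org",
                              "code": "mm[Hg]"},
        },
    ],
}
```

The point is not the specific payload, but that the model is open and shared: the data layer speaks one language, and applications adapt to it rather than to each other.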

If the cloud can’t help, how can we do this then?

Healthcare is semantically very complex. If we want to manage health data sustainably, we need an architecture that can deal with this complexity. Ideally this would be achieved using a combination of open standards, including those shown below.

Standard terminologies (like SNOMED CT and LOINC) are an important component, but to deliver a sustainable system at scale, for the lifetime of the patient, they need to be supported with other standards, including:

  • openEHR for modelling and long-term persistence of data. openEHR's proven library of open-source, re-usable clinical models provides a great foundation to build on (see the sketch after this list)
  • FHIR for data exchange between the core data layer and the wider system
  • OMOP CDM for making this data available for research and secondary use
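
To give a flavour of what this looks like in practice, here is a hedged sketch that pulls blood-pressure readings from an openEHR data layer using AQL over the openEHR REST API. The server URL is hypothetical, and I am assuming the published openEHR-EHR-OBSERVATION.blood_pressure.v2 archetype; treat it as an illustration rather than production code:

```python
# A hedged sketch: querying an openEHR data layer with AQL via the
# openEHR REST API. The base URL is hypothetical; the archetype id and
# at-code paths come from the published openEHR clinical models.
import requests

BASE_URL = "https://cdr.example.org/openehr/v1"  # hypothetical server

aql = """
SELECT
  obs/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
  obs/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
FROM EHR e
CONTAINS COMPOSITION c
CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.blood_pressure.v2]
"""

# Execute the query against the data layer's standard query endpoint.
response = requests.post(f"{BASE_URL}/query/aql", json={"q": aql})
response.raise_for_status()

for row in response.json().get("rows", []):
    systolic, diastolic = row
    print(f"BP: {systolic}/{diastolic} mmHg")
```

Notice that the query is written against the shared clinical model, not against any particular vendor's database schema.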

I have written at length about many of these topics, so check them out for a deeper dive — see Why openEHR is Eating Healthcare and FHIR + openEHR 2022.

Now, you may be wondering how the cloud fits into all of this

As I outlined at the start, I am a huge advocate of the cloud. It is a revolutionary technology that will continue to transform healthcare. You only have to look back at what happened during COVID to see how the cloud made it possible for many healthcare providers to continue to deliver front-line services.

Clinicians working at home could more easily access applications remotely. Telehealth software allowed remote consultations to become the default model overnight. Virtual COVID wards with remote patient monitoring were established to manage patients in their homes. Much of this was powered by the cloud.

But we also saw the challenges that had to be overcome because of the fragmented or proprietary state of the data. Huge data engineering initiatives were established in many countries, just to make sense of a relatively simple COVID dataset.

In contrast, those countries that had established an open data platform within their architecture were able to respond quickly. In Slovenia, where the national infrastructure is based on openEHR and IHE document-based exchange, a COVID-19 screening service was delivered in 14 days.

Can the “health clouds” from Amazon, Microsoft and Google help?

The short answer is yes, but only if you invest in using their services to evolve your architecture through data standardisation.

If you simply “lift and shift” your application from on-premises into the cloud, you will have done nothing to improve interoperability. You may have improved your security posture, or reduced costs, or made it easier for clinicians and patients to access the application — which are all fantastic outcomes and reasons for cloud adoption — but you will not have made any impact on your interoperability situation.

If, however, you see the cloud as an opportunity to modernise your stack and adopt cloud-native services that enable a standards-based data architecture, then we are moving in the right direction.

In this model all the cloud vendors have something different to offer healthcare customers, but of them all, Microsoft appears to be the one that understands interoperability best. The work they have done with FHIR and OMOP, and their recent announcement on joining the openEHR Foundation, shows they are committed to ensuring that if customers do move to the cloud, they will be supported in delivering an interoperable healthcare system.

Finally, one big advantage of the cloud is the range of services available to help you get the most out of your data once it is there. The data analytics and AI services, together with the security and governance capabilities, will enable Trusted Research Environments to be established, analytics to be performed and AI models to be trained. Of course, having your clinical data in an open format that is easily understood and computable is a foundational piece of this.

So, to sum it all up: the cloud by itself won't solve your interoperability challenges. But seen as an enabler, alongside an open data architecture, it is going to really help.

Interoperability, enabled by the cloud.


Alastair Allen

Football fan and Partner at EY | Board Member @openEHR_UK