FHIR + openEHR — 2022

Alastair Allen
18 min read · Jun 6, 2022

TL;DR — Check out this link for a two-page summary of this article.

I first started working in healthcare in 2009, having spent almost a decade prior to that in a range of different industries, including fin-tech — although we didn’t call it that then — and the public sector. The thing that struck me most entering healthcare was the complexity of the domain. Everything about it was an order of magnitude more complex than anything I had experienced before — from the clinical workflows that I was trying to digitise through to the compliance and governance of getting a digital solution into production. Oh, and the data. That turned out to be complex too.

A lot of the early projects I worked on utilised HL7 v2, v3 or CDA, but it wasn’t until around 2015 that I started playing with the latest new standard from HL7. One that would be influenced by the lessons of its predecessors to deliver a next-generation standard that would hopefully solve the biggest problem of them all — interoperability.

So, I jumped in and immersed myself in this new world — and it was great. Everything felt so simple — from the well-documented website to the growing collection of open-source tooling. I was able to get a FHIR server up and running in no time.

It was also around this time I had some new projects coming up that felt like a great fit for this emerging standard. So, faster than you could say “interoperability”, I was running a DSTU 2 FHIR Server in production. And guess what? It was great. The initial use-cases were mostly greenfield, were quite simple (by healthcare standards), and (ironically) didn’t involve a lot of data sharing. I wrote a few articles around this time relating to my experiences — including Building and Running a Healthcare Platform in the Cloud.

But these early projects grew beyond their MVP (minimum viable product) into more complex problem spaces. Alongside this, FHIR continued to evolve quickly and very soon a FHIR server running DSTU 2 felt like legacy technology. So, it was around 2017 that I started looking around for other options to address some of the challenges I was starting to see emerge.

So, about 20 years after it was invented, I discovered openEHR. At first sight, openEHR appeared to be a more academic version of FHIR, which led me to publish some articles to this effect — including FHIR vs openEHR. But it wasn’t until 2019 that the penny dropped and I realised that these two standards were not in fact competitive, but complementary — leading me to publish FHIR + openEHR.

This article is intended as a summary of these experiences. Hopefully, it will provide some guidance to people who have use-cases they are considering for FHIR and/or openEHR.

My journey with FHIR and openEHR

What's your problem?

But, before we get into the details of the subject matter, we first need to step back and review where we are in terms of the problems we need to solve and the shape of the health and care system post-COVID-19.

I’ve written in the past about the importance of starting with a problem and working back from user needs (see Digital is Gold, but Data is the New Currency) but still today too many conversations I have start with a discussion on technology or healthcare standards.

Last year, at the HiGHmed Symposium in Berlin, during a great talk on digital transformation, Patrik Georgii-Hemming identified this challenge too.

“As a clinician, I don’t care about standards — unless they solve my problem!”

Of course, everyone's individual problem statement is different, but there are a number of important trends that are currently in motion that will shape the technology choices we make going forward.

The graphic below illustrates the evolution of the health and care landscape across three generations.

The evolution of the health and care landscape
  • Generation 1 — A largely paper-based landscape, with some tactical systems used within certain organisations. Data is typically held in a non-standardised way and there is limited sharing of information.
  • Generation 2 — Systems centred around the EHR. These systems are often closed, but there is some sharing of information between organisations and care settings, typically by moving it around using messages and occasionally APIs.
  • Generation 3 — A shift to a regional model where systems and processes are designed around the citizen — not the organisation or EHR. At the core of this is a move to an open, standardised data record for the citizen: one that can be used in a safe, governed way by applications across all organisations and care settings, irrespective of where the person is.

Exactly where you are on the diagram will vary based on your individual location or circumstance, but broadly speaking I believe we are now entering generation 3.

The job of generation 3 will be to actively support the new and emerging models of care we are starting to see. Here we will see health and care that is delivered closer to the home, with new streams of data from wearables and sensors becoming the status quo. We will see mergers and acquisitions that will impact how organisations are structured and policy changes that will impact how services are planned, paid for and delivered.

Underpinning this we need an evolutionary architecture with a robust data platform that can embrace change in a safe, incremental way. To do this effectively we need the right combination of healthcare standards that will ensure we don't slip back into generation 2 ways of working.

What does the right combination look like?

As I write this in 2022, I think this idea of complementary standards working together has aged well. We now have a general industry consensus that a collection of standards — including FHIR and openEHR — should be combined to solve healthcare problems. The recent Data Governance Playbook from the Open Data Institute is one such example. The graphic below goes further and helps to visualise what a combination of standards looks like.

Healthcare standards combined

It also helps to illustrate an important point — there is overlap between many of these standards. Unfortunately, this overlap also creates a grey, fuzzy zone of uncertainty — especially where the blue and green circles overlap around FHIR.

It is also without doubt the root cause of the question I get asked the most.

“If FHIR can be used for clinical models and persistent data, then why do I need openEHR?”.

The answer, unfortunately, is not a simple black and white one. To quote Linus Torvalds, the famous creator of another open software platform:

“’It depends’ is almost always the right answer in any big question”.

But, what does it depend on?

In many ways, it is no different to the process you would follow when making any important technology decision. Similarly, the choice should always depend on your use case and the problem you want to solve.

In the context of the question we are exploring, the two key factors I believe you need to consider are:

  • How widely do you want to use, re-use and share information?
  • How complex is the problem and the associated data?

This can be visualised on a simple graph. I have left this blank for now but will come back to it later.

Use case analysis

But before we explore the characteristics of our use case, let’s recap what the core value propositions of both FHIR and openEHR are, as they will hopefully help us understand where some of the challenges start to emerge.

This slide I used in my FHIR + openEHR article is a nice simple summary and still stands up today.

FHIR compared with openEHR

Where FHIR defines a minimal data model (the 80/20 rule) designed to be extended or customised for specific use cases, openEHR defines a maximal data model designed to be constrained for specific use cases. In many ways, the focus of each standard is the inverse of the other. To illustrate this, I’ve recently started visualising it using the triangles below. This is not to imply that the narrow end of a triangle is weak, but rather to show where the focus of each standard lies.

An alternative comparison of FHIR and openEHR

This focus and associated design approach lead to several “trade-offs” that need to be considered when evaluating our “it depends” answer.

Tradeoff analysis is a separate topic, but in their book “Fundamentals of Software Architecture” Mark Richards and Neal Ford define the first law of software architecture as follows:

“The first law of software architecture is everything is a trade-off”.

All our technology choices have trade-offs — ranging from FHIR and openEHR through to the Cloud and AI. However, given the context of the question I am exploring, I will outline five trade-offs if you are considering using FHIR for persistence. This is not intended as a criticism of FHIR, but rather as a reflection of the experience gained through the projects outlined at the start of this article.

What are the FHIR trade-offs for persistent data?

1. Lack of resource maturity leading to compatibility issues

At the time of writing, the current published version of FHIR (v4.0.1) has 145 resources, of which 11 are classified at the Normative Level of stability and implementation readiness.

Normative resources are considered stable and locked. Resources that are not normative are typically classified as either Draft or Trial Use. Trial Use resources have been reviewed but have not yet seen widespread production use. In fact, all Trial Use resources come with a health warning on the HL7 website.

“Future versions of FHIR may make significant changes to Trial Use content that are not compatible with previously published content.”

To put that in perspective, 92% of resources in the current published version of FHIR are Trial Use and come with a compatibility health warning.

Regardless of your use case, this becomes a trade-off that you need to consider. Each time you upgrade to a new version of FHIR you will need to review the changes and assess the impact on your deployment.

However, when using FHIR as a persistence mechanism the overhead of this trade-off becomes significantly greater. Sometimes a change may be minor — such as a name change — but other times it can be a major change where the meaning of the information being modelled has changed.

An example from FHIR R4 is the AllergyIntolerance resource, where “assertedDate” was changed to “recordedDate”. The rename alone would have been a breaking change, but in this case the semantics of what is being modelled changed too.

These kinds of changes pose significant complexity during a FHIR upgrade — of which there will be many over the coming years — as the task is not only refactoring business logic or migrating/translating the data at rest but additionally ensuring all your API consumers have measures in place to deal with the change.

If your use case is based on the principle that data is for life, and should therefore be held in a robust, future-proofed format, this instability across version changes is a trade-off that must be seriously considered.
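To make the refactoring overhead concrete, here is a minimal sketch of what the AllergyIntolerance rename means for data at rest. The function name and the migration step are my own illustration, not part of any FHIR tooling; note the caveat about semantics, because a mechanical rename is not a complete migration.

```python
# Minimal sketch (my own illustration, not part of any FHIR tooling) of
# the field rename a FHIR upgrade can force on persisted data.

def migrate_allergy_intolerance(resource: dict) -> dict:
    """Rename assertedDate -> recordedDate on a stored AllergyIntolerance.

    Caution: in R4 'recordedDate' is the date the record was created,
    which may not be what 'assertedDate' captured in your data, so a
    mechanical rename like this still needs clinical review.
    """
    migrated = dict(resource)  # shallow copy is enough for a top-level field
    if "assertedDate" in migrated:
        migrated["recordedDate"] = migrated.pop("assertedDate")
    return migrated

stored = {"resourceType": "AllergyIntolerance", "assertedDate": "2016-09-01"}
migrated = migrate_allergy_intolerance(stored)  # now keyed by recordedDate
```

Multiply this by every renamed or re-modelled element, across every upgrade, and the persistence cost of Trial Use resources becomes clear.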

openEHR also has content that is still draft, but it has been designed to embrace change more easily. The base reference model provides an underlying technical foundation that will consistently support any openEHR implementation, anywhere in the world, at any time in the future. On top of this, the clinical archetypes are modelled, and finally templates are used to describe specific use cases. Importantly, only the reference model is implemented in software, creating a modelling environment that is more robust and adaptable to change. It also means versioning of openEHR artefacts (archetypes, templates, terminology subsets) can be applied to each individual source artefact, rather than to the entire repository.

From a maturity perspective, openEHR has been around now for nearly 20 years. At first glance, 20 years may sound like a long time for something important like this to emerge. However, technology does not follow a fixed timeline so we should not compare the first version of something to the first mass version of something. There are many examples of where the time period between these two events has been wide and varied, but the thing they all have in common has been a change in circumstance that has triggered the change. For example, it took almost 30 years for Python to become mainstream — triggered by an explosion in data and AI. Similar examples exist for the internet and smartphones. Healthcare up to now has been delivered inside a generation 1 or generation 2 system which is centred around an organisation or EMR. Only now as we enter generation 3 and need to deliver health and care services that are centred around the patient have we seen a data-first architecture — and the adoption of openEHR — being triggered.

2. Proliferation of profiles leading to reduced interoperability

FHIR extensions and profiles are a necessary but complicated part of FHIR. FHIR resources need to be extended or profiled when dealing with use-case-specific data that is not covered by the core published resource, following the “80/20 rule”. Unless you have a really basic use case, a reasonable assumption is that you will require extensions.

This provides great flexibility for implementers, but it also creates a number of challenges. One is that it can lead to significant variation in how the FHIR specification is implemented, ranging from those that just use native FHIR resources (with no extensions or profiles) to those that adopt either local or nationally defined profiles.

In a recent article, Thomas Beale made the following observation, which I think summarises it quite nicely:

“HL7 FHIR is not a standard, it is a standards building framework.”

What this means is that what you create using the FHIR specification is the thing that becomes the standard — i.e. the profiles or implementation guides. The challenge is that large groups of people have been building their own profiles, and in doing so are creating their own local version of a standard, each of which can define conflicting ways to store the data. In addition, each of these profiles is built on an underlying version of the FHIR specification, so is subject to the trade-off analysis outlined under point 1 above.
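As a sketch of how this plays out in stored data, here are two Patient resources recording the same fact under different local profiles. Everything here is invented for illustration (the extension URLs and codes are not real published profiles), but the shape of the problem is real: both records are valid FHIR, yet neither system can read the other’s data without a bespoke mapping.

```python
# Hypothetical example: two regions profile the same concept (ethnic
# category) differently. The extension URLs and codes are invented for
# illustration; they are not real published profiles.

patient_region_a = {
    "resourceType": "Patient",
    "extension": [{
        "url": "https://region-a.example.org/fhir/StructureDefinition/ethnicity",
        "valueCodeableConcept": {
            "coding": [{"system": "https://region-a.example.org/codes", "code": "A"}]
        },
    }],
}

patient_region_b = {
    "resourceType": "Patient",
    "extension": [{
        # different URL, and free text rather than a coded value
        "url": "https://region-b.example.org/fhir/StructureDefinition/ethnic-category",
        "valueString": "White - British",
    }],
}

# Same clinical fact, structurally incompatible records.
```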

This is where the design philosophy (when compared with openEHR) is important. openEHR addresses this challenge from the other direction. Instead of having a narrow set of models (i.e. resources) that get extended for each use case (via profiles and implementation guides), it adopts the idea of a “maximal data model” with a large core of mature models (archetypes) that are widely understood. These subsequently get constrained for each use case (via templates). The idea is that the data models can be used and re-used across lots of different use cases, but the underlying model remains the same. With FHIR, the model is essentially what is expressed by the use case (the resource, profile or implementation guide) and, as a result, models are typically very different across different settings. Moving to a common set of data models is what will allow us to establish healthcare systems that are genuinely interoperable. While there are also trade-offs with this approach, we are starting to see the benefits being quantified — something I outline later under point 5.

In practice, I have seen this flexibility often lead to a “big ball of mud” anti-pattern, where extensions simply become a convenient “bucket” to quickly drop local changes into, but when combined at a system level are very difficult to understand — similar to the challenges encountered with the Z-Segment in HL7 v2.

When this happens it becomes more difficult to share data across boundaries, as the local use case is often not well governed or understood. When storing data which you intend to use and re-use over a long period of time the challenge becomes exponentially harder.

3. Leaky abstractions leading to reduced interoperability

As a side effect of the so-called “80/20 rule”, FHIR is focused on the 80% of common use cases. This has led to a situation where several resources are modelled at a higher level of abstraction than the subject area being modelled and stored.

For anyone not familiar with the idea of abstraction, it is essentially a simplification of something much more complicated going on under the covers. Unfortunately, healthcare data is complicated, and when you hide it behind simplified models this complexity sometimes “leaks” through, and you feel the things the abstraction isn’t able to protect you from. This is another trade-off that you need to consider.

For example, take the Observation resource. This is represented as one generic resource in FHIR, but in openEHR each type of observation (Blood Pressure, Heart Rate, Body Temperature and so on) is modelled separately. Each openEHR archetype is designed specifically around one subject area, allowing specific context and other domain concepts to be accurately modelled. With FHIR, I continue to see new and different ways of representing core information such as the vital signs outlined above.
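To see what this abstraction looks like in practice, here is a blood pressure expressed as a generic FHIR Observation (the LOINC codes come from the FHIR vital signs profile). The structure itself says nothing about blood pressure; the meaning lives entirely in the codes, whereas the openEHR Blood Pressure archetype names systolic, diastolic, cuff size, patient position and so on as dedicated fields of the model.

```python
# A blood pressure as a generic FHIR Observation. Only the LOINC codes
# (from the FHIR vital signs profile) tell you what this record is.

blood_pressure = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9"}]},  # BP panel
    "component": [
        {  # systolic
            "code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},
            "valueQuantity": {"value": 120, "unit": "mmHg"},
        },
        {  # diastolic
            "code": {"coding": [{"system": "http://loinc.org", "code": "8462-4"}]},
            "valueQuantity": {"value": 80, "unit": "mmHg"},
        },
    ],
}
```

Drop or mistype one of those codes and the record still stores happily, but its clinical meaning is gone — which is exactly how the abstraction leaks.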

Another example is the collection of questionnaire resources, which are intended to solicit information from patients, providers or other individuals in a healthcare domain. In practice, I have seen many anti-patterns emerge, similar to those outlined under point 2 above, where the questionnaire resources become a convenient bucket that allows information to be technically captured and stored in a FHIR Server but creates barriers when it comes to understanding what that information actually represents.

If you are sharing information outside your organisational boundaries, this is an important trade-off to understand. Additionally, in an environment where data is being stored for long-term usage, the trade-off surrounding this lack of precision and context must be properly considered, to ensure data is not stored in a format that can’t be used for the lifetime of the patient.

4. Search Restrictions impacting insights

FHIR provides a Search capability, where each resource has a pre-defined set of search parameters that can be used to search for information. To see whether a search parameter meets your need, you can check the specification — at the bottom of every resource page there is a defined list of search parameters. For example: http://hl7.org/fhir/R4/patient.html#search

Similar to resources, search parameters also follow the 80/20 rule — i.e. the pre-defined list contains the parameters that the FHIR committee which defined the resource believes to be the most common. And, in a similar way to extensions, where you want to search on an attribute that is not pre-defined you need to create a SearchParameter resource to “extend” the search capability of a resource. Depending on how your FHIR Server is implemented, you may need to add an index to support this new search capability. Depending on your use case, this could become a complex aspect to manage.

Each individual search parameter is also subject to its own maturity cycle. If you consider the Patient resource referenced above, all 23 search parameters are (currently) classified as Trial Use. Similar guidance to that outlined under point 1 above should be evaluated here also.

Finally, FHIR Search can quickly become complex when dealing with advanced search requirements, e.g., “joins” across resources, composite search (multiple query parameters) or chained search (traversing references in the context of a query). These things can be done, but there are limitations and complications that may represent a trade-off in the context of your use case.
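As a sketch of how that syntax escalates, the helper below composes search URLs against a hypothetical R4 server. The parameter forms themselves (the chained `subject:Patient.name`, the `ge` date prefix, pipe-separated token values) are standard FHIR search syntax; the base URL and helper function are my own illustration.

```python
from urllib.parse import urlencode

BASE = "https://fhir.example.org/R4"  # hypothetical server

def search_url(resource: str, params: dict) -> str:
    """Compose a FHIR search URL (parameters are percent-encoded)."""
    return f"{BASE}/{resource}?{urlencode(params)}"

# Simple: one pre-defined parameter on one resource.
simple = search_url("Patient", {"family": "Smith"})

# Harder: a chained search traversing the subject reference into Patient
# (effectively a join), combined with a token and a date-range parameter.
chained = search_url("Observation", {
    "subject:Patient.name": "smith",
    "code": "http://loinc.org|8867-4",  # heart rate
    "date": "ge2022-01-01",
})
```

Whether the server can actually execute the chained query efficiently, and over which references, is implementation-dependent — which is exactly the trade-off described above.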

openEHR, by contrast, has a native query language called AQL (Archetype Query Language). It is not quite SQL, but it provides an interface that is very close, and with some training a person familiar with SQL should be able to pick it up easily and perform a range of queries, from simple through to more complex.
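For a flavour of AQL, here is a query for raised systolic readings against the published Blood Pressure archetype, wrapped in the JSON body that the openEHR REST API’s query endpoint accepts. The archetype paths follow the published archetype, but in a real deployment you would check them against your template, and the endpoint URL is a deployment detail.

```python
import json

# Example AQL: systolic/diastolic magnitudes from the published openEHR
# Blood Pressure archetype, filtered to raised systolic readings.
aql = """
SELECT o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
       o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
FROM EHR e
CONTAINS COMPOSITION c
CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.blood_pressure.v2]
WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 140
"""

# The openEHR REST API accepts the query as JSON, e.g. POST {base}/query/aql
payload = json.dumps({"q": aql.strip()})
```

The SELECT/FROM/WHERE shape is what makes AQL approachable to anyone with a SQL background; the archetype paths are what make it precise.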

5. Lack of maturity constraining productivity

The trade-offs outlined above can ultimately combine in a way that reduces overall development productivity and agility. In addition, as there are (by design) a smaller number of models, more time needs to be spent modelling and profiling resources.

In contrast, there is an international community around openEHR — spanning over 100 countries — where over 1500 clinicians have helped curate over 800 clinical models that are governed and managed by bodies such as the openEHR and Apperta Foundations, using applications such as the openEHR Clinical Knowledge Manager (CKM) to collaborate. This allows more time to be spent with users to understand their needs, allowing use-case specific templates to be created that constrain the openEHR archetypes.

On the projects I have been involved in, we typically find that around 80% of existing models can be re-used when we deliver new projects for our customers. Recently, there have been studies (see “openEHR archetype use and reuse within multilingual clinical data sets: a case study” by Heather Leslie) where archetype re-use has been quantified at between 40% and 100% for a range of COVID-19 use cases.

Of course, many openEHR models are still draft, and until we get closer to 100% re-use of published archetypes we don’t have true interoperability. But as adoption of openEHR continues and the community grows, I believe we will continue to see strong progress in this direction.

Back to our use case

So, let’s come back to our use case and think about these trade-offs in the context of the two key factors we identified earlier — the complexity of the models and the scope of information sharing.

As you can imagine, if you are dealing with simple models that are well defined — such as many administrative resources like Patient, Appointment or Encounter — then FHIR might be a good choice. Equally, if you are only sharing data inside a well-defined boundary then some of the trade-offs I have presented may not impact your solution.

This allows us to establish some zones that can help to guide us when we are considering whether to use FHIR for persistence. I have summarised this in the image below.

Using FHIR for persistence

The further you move to the top right the closer you will approach the implementation of long-term strategic solutions, where the quality, consistency and longevity of the data really matter. In these use cases, I believe FHIR is not the correct solution as a persistence mechanism.

So, what is the answer?

As I outlined in my FHIR + openEHR post, I believe a combination of standards is required in order to help solve the complex problems in healthcare. FHIR has seen huge adoption and is well supported across a host of different apps and wearables. As care continues to be delivered outside of the hospital and closer to the patient, we will see the emergence of vast new streams of data. The answer is not to use openEHR to manage the exchange or transmission of all this data. Equally, the answer is not to use FHIR to store all of it together.

We need to combine FHIR and openEHR together.

The following architectural patterns are the ones I see most often when working with customers.

FHIR + openEHR Architecture Patterns
  • Pattern 1 — Facade — A translation component is introduced to translate inbound and outbound FHIR requests to an openEHR repository. Typically, alongside this, openEHR APIs will be used to support a complete set of functionality.
  • Pattern 2 — Message Broker — An integration engine acts as a message broker to translate incoming messages from external applications. 2b is shown for completeness but it is not a practical use case I have seen to date.
  • Pattern 3 — Sync Agent — A sync agent exists to copy a defined group of data items between a FHIR and openEHR repository.
  • Pattern 4 — Sharded — FHIR and openEHR exist alongside each other with data being stored in each location based on the nature of the data. We do this at Better where we use FHIR to store much of the administrative data we manage, including patient demographics, encounters and appointments.
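As a minimal sketch of Pattern 1 (facade), here is a translation from an inbound FHIR blood pressure Observation to a flattened openEHR composition. Both the flat paths and the mapping function are hypothetical; in practice these mappings are defined per openEHR template.

```python
# Hypothetical facade translation: FHIR blood pressure Observation ->
# flattened openEHR composition, keyed by simplified template paths.
# The flat paths below are invented; real ones come from your template.

def fhir_bp_to_flat(observation: dict) -> dict:
    # Index the components by their LOINC code.
    by_loinc = {
        c["code"]["coding"][0]["code"]: c["valueQuantity"]["value"]
        for c in observation.get("component", [])
    }
    return {
        "vitals/blood_pressure/systolic|magnitude": by_loinc.get("8480-6"),
        "vitals/blood_pressure/diastolic|magnitude": by_loinc.get("8462-4"),
    }

fhir_obs = {
    "resourceType": "Observation",
    "component": [
        {"code": {"coding": [{"code": "8480-6"}]}, "valueQuantity": {"value": 120}},
        {"code": {"coding": [{"code": "8462-4"}]}, "valueQuantity": {"value": 80}},
    ],
}
flat = fhir_bp_to_flat(fhir_obs)
```

The hard part is not this happy path but the long tail of mappings, versions and edge cases — which is why the translations have historically been bespoke.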

The challenge up until now has been how to actually build the translations shown above. Typically this has required bespoke software development or mappings within a proprietary integration engine.

The good news is that this is about to change.

At HIMSS Europe this year we will be launching a new product to address these challenges in an open, vendor-neutral and re-usable way. A core focus for us has been how to leverage the tooling and investment that already exists around FHIR — especially the use cases where it is being used to connect to and exchange information with different systems (e.g. IoMT, DICOM, legacy data). We see FHIR as an important exchange mechanism that helps enable an over-arching vision that data should be for life. I won’t spoil the surprise any further, but stay tuned for more details.

So, I will close this post the same way I did in 2019:

FHIR and openEHR are complementary — combine both to create an open, interoperable eco-system where data is long-lived, computable and easily understood.

FHIR + openEHR


  • There are many standards I have not shown or explored in this article, e.g. ISO 13606. I plan to cover these in future articles.
  • Other supporting components are required to deliver a generation 3 healthcare system, e.g. clinical vocabularies. These are outside the scope of this article.
  • Thanks to everyone who reviewed this article prior to it being published.



Alastair Allen

Football fan and Partner at EY | Board Member @openEHR_UK