Storing AI predictions in FHIR

Felix Le Chevallier
Published in Lifen.Engineering
Jan 24, 2019


At Lifen, we took a bet 18 months ago when we decided to migrate our entire data schema to FHIR, after having set up a FHIR API to handle communications with our partners and between our different apps. So far, we've been pretty happy with this decision, and we're seeing more and more people joining the FHIR community!

Our FHIR server delivers health data to all our internal and external apps in this standard, interpreted through various open source FHIR clients.

Our new, FHIR based, architecture

Until now, we kept our AI flow separate (the AI would only read FHIR data), as the standard has no specification for how predictions should be stored in a FHIR model. Our AI is responsible for extracting data from unstructured PDFs (consultation notes, discharge summaries, MRI reports) and structuring it for client apps. To facilitate the viewing and sharing of these documents, we currently extract the following information:

  • Patient first and last names
  • Patient birth date
  • Practitioner and Organization recipients, which are then matched against our records to enable delivery through secure online messaging providers
  • Patient recipients (we are working on a way to give patients online access to the PDFs, but for now it's still postal mail)
  • Document type (surgical operation note, consultation note)

Our workflow is based on the FHIR CommunicationRequest concept. Apps and partners create CommunicationRequests with a Binary file enclosed in a DocumentReference. Once we have found the pieces of information needed to send the document, we create Communication objects that summarize who it went to and which patient it concerns.
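As a sketch, the incoming payload might look like the following. The resource types (CommunicationRequest, DocumentReference, Binary) are standard FHIR, but the ids and field values are illustrative, not Lifen's actual schema:

```python
# Illustrative CommunicationRequest created by an app or partner, with the
# PDF enclosed as a Binary inside a DocumentReference. Ids and values are
# made up for the example.
communication_request = {
    "resourceType": "CommunicationRequest",
    "status": "active",
    "payload": [
        # the document to be sent, referenced by its DocumentReference
        {"contentReference": {"reference": "DocumentReference/doc-123"}}
    ],
}

document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "content": [
        # the unstructured PDF itself lives in a Binary resource
        {"attachment": {"contentType": "application/pdf", "url": "Binary/bin-123"}}
    ],
}
```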

Predicting these pieces of information is inherently asynchronous: the document might need OCR to extract raw content, then pass through prediction pipelines and matching strategies, so an event-based workflow is well suited to the problem. We try to keep the whole process under a couple of seconds so that our predictions can be integrated into client-facing apps.

We wanted a solution that let us persist predictions as independent objects with their own lifecycle, and we found that the FHIR searchset Bundle provides a generic way to store predictions for roughly anything we might want in the future.

So when a new CommunicationRequest is created, the server emits an event, which creates an AI job in a queue. The app picks up the job, reads data from the API (the document content with some context), structures the data into separate predictions, and writes the Bundle to the API, which in turn emits an event for clients to catch so they can fetch the predictions.
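A minimal, in-memory sketch of that loop follows; the function names, job shape, and store are assumptions for illustration, not Lifen's actual code:

```python
from queue import Queue

jobs = Queue()           # the AI job queue
store = {"Bundle": []}   # stands in for the FHIR API

def on_communication_request_created(request_id):
    """Server-side event: a new CommunicationRequest enqueues an AI job."""
    jobs.put({"request_id": request_id})

def run_predictions(request_id):
    """Placeholder for OCR, the prediction pipelines, and matching strategies."""
    return [{"resource": {"resourceType": "Patient", "name": [{"family": "Doe"}]},
             "search": {"mode": "match"}}]

def worker():
    """The AI app: pick up a job, predict, write a searchset Bundle."""
    job = jobs.get()
    bundle = {
        "resourceType": "Bundle",
        "type": "searchset",
        "entry": run_predictions(job["request_id"]),
    }
    store["Bundle"].append(bundle)  # in practice, this write emits an event for clients
    return bundle

on_communication_request_created("cr-1")
bundle = worker()
```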

Event Flow (dotted lines) and Read Writes (full lines) from our AI app

Basically, we can see a set of predictions (a Bundle) as a search result for a given resource. Each result comes with a score (match-grade) and a matching target (match-target) that explain why the entry is in the Bundle.

For example, a Bundle of predictions for a DocumentReference resource might contain a document type with high confidence and a subject (patient) with lower confidence. Each one is modelled as an entry in the Bundle and scored with the official Bundle.entry.search match-grade extension, which has four possible values:

certainly-not | possible | probable | certain
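Following the example above, here is a sketch of how the match-grade extension attaches to Bundle.entry.search. The extension URL is the official one from the FHIR specification, but the resource content and ids are illustrative:

```python
MATCH_GRADE = "http://hl7.org/fhir/StructureDefinition/match-grade"

# Document type predicted with high confidence...
type_entry = {
    "resource": {"resourceType": "DocumentReference", "id": "doc-123",
                 "type": {"coding": [{"display": "Consultation Note"}]}},
    "search": {"mode": "match",
               "extension": [{"url": MATCH_GRADE, "valueCode": "certain"}]},
}

# ...and the subject (patient) predicted with lower confidence.
subject_entry = {
    "resource": {"resourceType": "Patient", "id": "pat-42",
                 "name": [{"family": "Doe", "given": ["Jane"]}]},
    "search": {"mode": "match",
               "extension": [{"url": MATCH_GRADE, "valueCode": "probable"}]},
}
```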

We added an extension to Bundle.entry.search, called match-target. It uses a FHIRPath expression to describe what is being predicted; here it would be DocumentReference.type and DocumentReference.subject.

Bundle entries are, by definition, resources. So when we are not predicting a full resource (as we are with a patient prediction), we put the entire containing resource in the entry, with the newly predicted field set, as if we had patched the resource.
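Combining the two extensions, a document type prediction entry might be sketched like this. match-target is Lifen's own extension, so the URL below is a placeholder rather than their published one; note that the entry carries the whole DocumentReference with the predicted type field set, as if patched:

```python
MATCH_GRADE = "http://hl7.org/fhir/StructureDefinition/match-grade"         # official
MATCH_TARGET = "https://example.org/fhir/StructureDefinition/match-target"  # placeholder URL

type_prediction = {
    "resource": {
        # the full container resource, with only the predicted field filled in
        "resourceType": "DocumentReference",
        "id": "doc-123",
        "type": {"coding": [{"display": "Surgical Operation Note"}]},
    },
    "search": {
        "mode": "match",
        "extension": [
            {"url": MATCH_GRADE, "valueCode": "probable"},
            # FHIRPath of what this entry actually predicts
            {"url": MATCH_TARGET, "valueString": "DocumentReference.type"},
        ],
    },
}
```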

Finally, we link our Bundle to the originating resource with an Identifier, so that we can search for the predictions attached to that resource. The Bundle is uniquely identified by its originating resource.
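As a final sketch, the Bundle's identifier might look like this; the identifier system URL is an assumption, and the value ties the Bundle back to its originating resource:

```python
prediction_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    # links the Bundle back to the resource the predictions are about;
    # the system URL is a placeholder, not Lifen's actual one
    "identifier": {
        "system": "https://example.org/fhir/prediction-for",
        "value": "DocumentReference/doc-123",
    },
    "entry": [],  # the prediction entries, scored with match-grade / match-target
}

# Clients could then find predictions for a given document with a search like:
#   GET /Bundle?identifier=DocumentReference/doc-123
```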

Putting it all together we’ve got our shiny Prediction Bundle, ready to be read and processed by our apps & partners!

This is our first iteration and it might evolve after a few months of being around, but we’d love to see more AI FHIR Standards discussions in the community as it will surely be a hot topic in the coming years!

Thanks to our Interoperability Product Manager and FHIR guru for her great help designing this new kind of resource, and to the team for the implementation!

Want to talk more about FHIR? We started a FHIR meetup in Paris. Oh, and we also have a lot of open job positions :)
