The things we could do with Medical Records, Part III: Understanding ICU data

If you’ve ever been in an ICU to visit a very sick family member (and hopefully you haven’t) you’ll know that the number of machines, devices, tubes, lines, and drains can be overwhelming. Luckily, ICU physicians and nurses are there to make sense of the chaos, and make sure everything is going well.

A huge part of the ICU nurse’s job, though, is data entry. Every number on every screen is recorded, charted, and saved. This is done in a number of ways, none of which make sense in this era of connection and information. As an ICU nurse, my strategy was to photograph each screen, walk back to my computer, and read the numbers off the photos, typing them back into the medical record. Other people write numbers on a strip of tape, marked with the time they wrote the numbers down. People use scraps of paper, their scrubs, or even just their memory (yikes!). In the worst cases, people simply copy and paste values from previous hours, or make them up.

The problem is more than data entry

The crazy thing is that most of these devices produce data every second, but medical records only capture a value once an hour. If you think about that, it’s essentially lossy compression, discarding roughly 99.97% of the data.
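The loss figure is simple arithmetic, assuming one reading per second and one charted value per hour:

```python
# One vital-signs reading per second vs. one charted value per hour.
samples_per_hour = 3600   # a monitor emitting one reading every second
charted_per_hour = 1      # the medical record keeps one value per hour

fraction_lost = 1 - charted_per_hour / samples_per_hour
print(f"{fraction_lost:.2%} of the data is discarded")  # → 99.97%
```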

What could we do with all that data if we had it? It’s hard to say. It’s too much data for a human to take in, but it’s not too much to be processed by a computer, it’s not too much to trigger alerts or alarms, and it’s definitely not too much to be analyzed and mined!
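As one hypothetical sketch of what “trigger alerts” could mean on per-second data: a rolling average over a short window can flag a sustained change in a vital sign while ignoring a single noisy reading. The function name, window size, and threshold below are all illustrative assumptions, not anything an actual monitor or EMR exposes.

```python
from collections import deque

def alert_on_stream(readings, window=30, high=130):
    """Yield (index, rolling_mean) whenever the mean of the last
    `window` per-second readings exceeds `high`.

    Using a rolling mean (rather than the raw value) means one
    artifactual spike doesn't fire an alarm, but a sustained climb does.
    """
    recent = deque(maxlen=window)  # keeps only the last `window` values
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window:
            mean = sum(recent) / window
            if mean > high:
                yield i, mean

# Example: a heart rate that ramps from 80 to 150 over about a minute.
stream = [80] * 60 + list(range(80, 150)) + [150] * 120
alerts = list(alert_on_stream(stream))
```

With this toy stream, the first alert fires partway up the ramp, once the 30-second average crosses the threshold, rather than on the first high beat.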

Which leads us to the biggest problem — there’s no unified way to get this data into one place. There’s no common API, and there’s no backend that collects all of it. A lot of these devices aren’t even network-connected (for some very good reasons, if you’ve ever watched Homeland). But there are ways around it. What if you put a mini camera on every device in a room, and then used computer vision to read the screen? All the screens are optimized to be highly readable, with large, clear fonts. What could you do with all those data streams?

We won’t know until someone tries it, but as someone who has watched those monitors for years… I bet you could do a lot.

This post is part of a series of explorations of what could happen in the EMR space, done as an independent project at Cornell Tech in 2016. Read part 1, part 2, and part 4.

Like what you read? Give Ben Duchac a round of applause.