The things we could do with Medical Records, Part I: Computer Radiologists

Chest x-rays make up 44% of the nearly 200 million x-rays done every year in the US. Each of those 80+ million images is analyzed and interpreted by a trained radiologist. That interpretation is recorded into a plain text note by the radiologist, and the pair is then added to the patient’s record.

The image is a fairly high-resolution grayscale image containing what is essentially HDR data (a wider range of blacks and whites than a traditional screen can show). The report varies in format, but at some point has an “interpretation” that reads something like this:
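To display that extra dynamic range on an ordinary 8-bit screen, viewers map a chosen “window” of the raw pixel values onto the visible range. This is a minimal sketch of that mapping; the function name and the example center/width values are hypothetical (real studies carry recommended window settings in their metadata):

```python
import numpy as np

def window_to_display(pixels, center, width):
    """Map a high-bit-depth grayscale image to 8-bit display values.

    Values below the window floor clip to black, values above the
    ceiling clip to white, and everything in between scales linearly.
    """
    lo = center - width / 2
    hi = center + width / 2
    clipped = np.clip(pixels, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# e.g. a 12-bit image (values 0-4095) viewed with a full-range window
raw = np.array([[0, 1024, 2048, 4095]], dtype=np.uint16)
display = window_to_display(raw, center=2048, width=4096)
# display -> [[0, 63, 127, 254]]
```

Narrowing the window (smaller `width`) stretches a slice of the intensity range across the whole screen, which is how a radiologist can inspect subtle soft-tissue differences that a naive 8-bit conversion would crush.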

INTERPRETATION: There has been interval development of a moderate left-sided pneumothorax with near complete collapse of the left upper lobe. The lower lobe appears aerated. There is stable, diffuse, bilateral interstitial thickening with no definite acute air space consolidation. The heart and pulmonary vascularity are within normal limits. Left-sided port is seen with Groshong tip at the SVC/RA junction. No evidence for acute fracture, malalignment, or dislocation.

Given the enormous wealth of pre-existing x-rays, the attached reports actually describing those images, and the fact that neither the interpretation nor the image contains protected health information (once properly de-identified), this is a prime opportunity to train a machine learning algorithm. A “smart” chest x-ray reader could highlight potential issues, detect changes too subtle for the human eye to pick up easily, and help avoid the tunnel vision that so frequently afflicts us when dealing with patients.
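One way those free-text interpretations could supervise an image model is by extracting weak labels from the report text. The sketch below is a deliberately naive illustration using keyword matching with crude negation handling; the `weak_labels` function and the findings list are hypothetical, and real report labelers handle negation and uncertainty far more robustly:

```python
import re

# Hypothetical finding vocabulary; a real system would use a much
# larger ontology and a proper NLP pipeline.
FINDINGS = ["pneumothorax", "consolidation", "fracture", "dislocation"]

def weak_labels(report):
    """Return {finding: present?} for findings mentioned in a report."""
    labels = {}
    # Split the interpretation into sentences on periods.
    for sentence in re.split(r"(?<=\.)\s+", report):
        for finding in FINDINGS:
            if finding in sentence.lower():
                # A finding preceded by "no" in the same sentence
                # counts as explicitly absent.
                negated = bool(re.search(r"\bno\b[^.]*" + finding,
                                         sentence, re.I))
                labels[finding] = not negated
    return labels

report = ("There has been interval development of a moderate left-sided "
          "pneumothorax. No definite acute air space consolidation. "
          "No evidence for acute fracture, malalignment, or dislocation.")
print(weak_labels(report))
# -> {'pneumothorax': True, 'consolidation': False,
#     'fracture': False, 'dislocation': False}
```

Even labels this noisy can bootstrap an image classifier when you have tens of millions of image–report pairs to average over.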

This post is part of a series of explorations of what could happen in the EMR space, done as an independent project at Cornell Tech in 2016.
part 2, part 3, part 4
