Diagnosing problems via an X-Ray is typically done by a doctor. Can a machine be just as accurate? Photo by National Cancer Institute on Unsplash.

Project Overview: Using Computer Vision and NLP to Caption X-Rays

Data cleaning, deep learning, and model deployment to support radiologists.

--

Automation has been a major driving force of increased efficiency, reliability, and speed across multiple industries, ranging from banking to transportation to agriculture. In this project, we investigate the potential of deep learning models to automate the process of medical image reporting, looking specifically at chest X-ray images.

Developing a deep learning model to generate/support the reporting of findings and impressions from X-ray images would be a highly valuable development since it takes radiologists a significant amount of time to carry out this process for a large number of patients. Depending on the level of correctness achieved by the model, it may also be able to reduce human error, which is especially costly in the medical field.

In this series, we dive deep into the Chest X-Rays Indiana University dataset hosted on Kaggle. Our process is broken down into a series of topics covering data cleaning, deep learning, and model deployment.

The goal of this project is to measure the similarity of machine-predicted captions to actual captions provided by doctors.
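This excerpt does not name the similarity metric used, so as an illustration only, here is one simple way such a comparison can be made: token-level overlap F1 between a model-generated caption and a doctor's reference caption. The function name and the example captions below are hypothetical, not taken from the project.

```python
from collections import Counter

def caption_overlap_f1(predicted: str, reference: str) -> float:
    """Token-level F1 between a predicted caption and a reference caption.

    A simple illustrative metric; real caption evaluation often uses
    BLEU, ROUGE, or CIDEr instead.
    """
    pred_tokens = Counter(predicted.lower().split())
    ref_tokens = Counter(reference.lower().split())
    # Clipped count of tokens appearing in both captions.
    overlap = sum((pred_tokens & ref_tokens).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: compare a model caption to a reference sentence.
ref = "the lungs are clear with no acute cardiopulmonary abnormality"
pred = "lungs are clear no acute abnormality"
score = caption_overlap_f1(pred, ref)
```

In practice, n-gram metrics such as BLEU penalize word-order differences as well, which matters for clinical phrasing; this unigram sketch only captures vocabulary overlap.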

The code is hosted and usable at this GitHub repository.

--


Alexander Bricken
