Introducing Q&Aid — Winner of PyTorch Summer Hackathon

Bogdan Cebere
Dec 13, 2020 · 5 min read

Authors: Tudor Cebere, Andrei Manolache, Horia Ion, Bogdan Cebere.

Q&Aid is a healthcare assistant concept that aims to democratize access to high-quality diagnoses. It has the potential to comfort patients, unburden doctors, and generate trust by building a lifelike doctor-patient relationship.

Q&Aid won 1st prize at the Global PyTorch Summer Hackathon 2020, Web/Mobile section. The code is open-source and available here.

Inspiration

The year 2020 brought changes worldwide, imposing new standards when it comes to protecting everybody around you. In this time of need, one of the most harmed and overloaded institutions is the hospital, which is the first line of defense against the pandemic and a desperately needed place for some of us with critical medical needs. With hospitals being filled so that those in need cannot go to the doctor, we thought a smart healthcare assistant might help.

What it does

Q&Aid is a concept for addressing the healthcare institution overload problem. It is a conversational agent that relies on a series of machine learning models to filter, label, and answer medical questions about a provided image, as described further below. The transcript can then be forwarded to the closest hospitals, and one of them contacts the patient to make an appointment.

Each nearby hospital trains its own models on private data, fine-tuning a visual question answering (VQA) model and other task models as its data allows (e.g., brain anomaly segmentation). Q&Aid aggregates all of the tasks these hospitals can perform into a single chat app, offering the user results and features from every nearby hospital. When the chat ends, the transcript is forwarded to each hospital, and a doctor remains in charge of the final decision.

High-level overview of Q&Aid’s analysis stages.

Q&Aid can simplify the hospital backend logic by standardizing it into a Health Intel Provider (HIP). A HIP is a collection of models trained on local data that receives text and visual input, then filters, labels, and feeds the data to the right models, finally generating output for the aggregator. Any hospital is identified as a HIP holding custom models and labels based on its own knowledge.
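As a sketch of this idea (the class and function names below are ours, not taken from the Q-Aid codebase), a HIP can be modeled as a router that dispatches an incoming request to whichever task models the hospital has trained for the detected label:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class HealthIntelProvider:
    """Hypothetical sketch of a HIP: filter -> label -> route to task models."""
    name: str
    # Maps an image label (e.g. "brain") to the task models this hospital serves.
    models: Dict[str, List[Callable[[bytes, str], str]]] = field(default_factory=dict)

    def handle(self, image: bytes, question: str, label: str) -> List[str]:
        # A real HIP would first run the medical filter and the labeling model;
        # here the label is passed in to keep the sketch short.
        if label not in self.models:
            return []  # this hospital has no model for that label
        return [model(image, question) for model in self.models[label]]

# Usage: a hospital that only serves brain VQA.
def fake_brain_vqa(image: bytes, question: str) -> str:
    return f"brain-vqa answer to: {question}"

hip = HealthIntelProvider(name="hospital-a", models={"brain": [fake_brain_vqa]})
answers = hip.handle(b"...", "Is there an anomaly?", "brain")
```

The aggregator would then collect `answers` from every nearby HIP and merge them into a single chat reply.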

How we built it

There are three sections of the project that are worth mentioning:

Q-Aid-App

Without any mobile development experience, we were looking for a solution to build an authenticated application that works on Android and iOS.

We chose React Native and AWS Amplify as solutions because of the plethora of tutorials and examples.

We used this tutorial for the development setup, and we followed the AWS Amplify tutorials here for creating an authenticated application.

The chat component is based on the awesome GiftedChat.

Q-Aid-Core

Built using FastAPI, the component creates a bridge between the mobile application and the Q-Aid-Models.

The API contains the following paths:

We built and pushed a Docker image using this Dockerfile to ease the AWS deployment.

The final step, the AWS deployment, was inspired by this tutorial and resulted in a set of scripts that deploy the full cloud infrastructure.

Q-Aid-Models

Visual Question Answering is a challenging task for modern Machine Learning. It requires an AI system that can understand both vision and language, such that it can answer text-based questions given the visual context (an image, CT scan, MRI scan, etc.).

Reference: https://github.com/aioz-ai/MICCAI19-MedVQA

Our VQA engine is based on MedVQA, a state-of-the-art model trained on medical images and questions, using Meta-Learning and a Convolutional Autoencoder for representation extraction, as presented here.

Medical segmentation is the task of highlighting a region or a set of regions with a specific property. While this task is mostly solved in the general-purpose setting, it remains quite hard in the medical setting: the problem itself is difficult, even humans have a higher error rate when highlighting abnormalities in the brain, and labeled data is scarce.

Our model uses a U-Net architecture, an encoder-decoder network based on downsampling and upsampling with skip connections that performs well at localizing features, as presented on the PyTorch hub thanks to the work of Mateusz Buda.
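The pretrained weights live on the PyTorch hub, but the core idea (downsample, upsample, and concatenate a skip connection before the output head) can be sketched in a few lines of PyTorch. This toy single-level module is our own illustration, not Mateusz Buda's implementation:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy single-level U-Net: one downsampling step, one upsampling step,
    with a skip connection concatenated before the final 1x1 convolution."""
    def __init__(self, in_ch: int = 3, out_ch: int = 1, width: int = 8):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, width, kernel_size=3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Conv2d(width, width, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose2d(width, width, kernel_size=2, stride=2)
        # The skip connection doubles the channel count before the head.
        self.head = nn.Conv2d(2 * width, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = torch.relu(self.enc(x))          # full-resolution features
        mid = torch.relu(self.mid(self.down(skip)))  # half-resolution features
        up = self.up(mid)                       # back to full resolution
        return self.head(torch.cat([up, skip], dim=1))

# A 64x64 input produces a segmentation map of the same spatial size.
mask = TinyUNet()(torch.randn(1, 3, 64, 64))
```

The real model stacks several such levels and adds batch normalization, but the shape-preserving skip-connection pattern is the same.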

Medical labeling is the task of choosing what kind of image the user is feeding into the app. So far, possible labels are brain, chest, breast, eyes, heart, elbow, forearm, hand, humerus, shoulder, wrist. Currently, our VQA model has support only for brain and chest, but we are working on adding support to multiple labels.

Our model uses a Densenet121 architecture from the torchvision module, an architecture proven suitable for medical imagery by projects like MONAI, which uses it extensively.

Medical filtering is the task of splitting images into two sets, medical and non-medical, so that all non-medical data is filtered out before being fed into the other machine learning models.

Our model uses a Densenet121 architecture from the torchvision module.
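Regardless of backbone, the filtering decision itself reduces to thresholding the classifier's output. A minimal sketch (the threshold value is illustrative, not the one used in Q-Aid):

```python
import torch

def filter_medical(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a boolean mask marking which images look medical."""
    probs = torch.sigmoid(logits)  # map raw scores to [0, 1]
    return probs >= threshold

# Example: the classifier is confident about the first two images
# and borderline-positive on the third.
logits = torch.tensor([2.0, -3.0, 0.1])
mask = filter_medical(logits)  # [True, False, True]
```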

Datasets

The datasets used in this project are the augmented version of:

What’s next

Q&Aid has several tracks for its future:

References

Acknowledgments

We thank Cătălina Albișteanu for providing valuable feedback and suggestions.

And congratulations to the PyTorch team for organizing a fantastic competition.
