Curai Health Tech

Research at Curai

A quick overview of our research publications is available here.

We started our company to fundamentally reimagine what it means to truly provide the best healthcare to every human being. This requires transformative solutions that scale medical providers’ abilities and lower the barrier to entry for users seeking care. A natural component of the solution is tight, seamless integration of AI/ML into the doctor-patient interaction flow, one that augments and scales medical providers rather than acting merely as a plug-and-play addition to an existing workflow. To truly achieve this, several open research problems at the intersection of machine learning and healthcare need to be addressed. That’s why research has been one of the top priorities in the company from the very beginning.

We also acknowledge that we cannot solve all of these research problems on our own: we actively engage the research community through external collaborations and through our research internship program. Our recent external collaborations include partnerships with Stanford, MIT, Georgia Tech, Massachusetts General Hospital, and Vanderbilt University. We publish in top-tier conferences both to give back to the community and to have our research vetted.

In this blog post, we describe some of our research work, specifically in the areas of clinical decision making and natural language processing, which have been our two main areas of focus because of their connection to healthcare in general and to Curai Health in particular.

Clinical decision making

In a perfect world, clinical decision making is a complete information game in which the medical provider has all the information about the patient, and the entirety of this information is used to draw conclusions regarding diagnosis and care plan. As we described in an earlier blog post, many factors prevent this from happening in the real world. Our hypothesis is that AI diagnostic systems will complement medical providers in solving the complete information game by gathering all the pertinent information from patients, while at the same time suggesting diagnostic possibilities and treatment plans [1].

For clinical AI/ML solutions to be truly effective, these systems need to be deployable “in the wild”. By this we mean that AI/ML systems need to make correct decisions in environments and medical situations they may never have encountered. A system may be extremely good at a thousand conditions, yet once deployed in the wild, it may need to attend to a patient with the thousand-and-first condition it has never seen before.

While it’s a tall order for AI systems to perform “in the wild”, we can make progress toward this goal by keeping a medical provider in the loop. For this to happen, these systems need two properties. First, AI/ML systems need to be extensible: they should automate parts of the workflow in the interaction between the medical provider and patient, and incrementally update themselves as more data becomes available or by learning from the provider in the loop. This means the models become more knowledgeable over time as they interface more. Second, because of the need to selectively automate, these AI models also need to know what they know and what they don’t, so that the system can either automate the task or gracefully fall back to the providers. This also builds trust with the providers.

Medical diagnosis

Diagnosis is “a mapping from a patient’s data (normal and abnormal history, physical examination, and laboratory data) to a nosology of disease states” [3].

Since the 1970s, there have been significant efforts to build AI-driven clinical decision support. Early approaches depended extensively on expert systems, in which medical experts curated knowledge bases and inference algorithms based on medical studies. These medical studies are costly to run and may be limited in their size and in how rapidly they generalize to the population as a whole. Machine-learned models for diagnosis hold a lot of promise in terms of scalability and extensibility by leveraging data stored in clinical electronic health records, but they do not offer an immediate way to encode prior medical knowledge.

Our earliest research asked the following questions: “How do we design extensible machine-learned models from scratch? How can we incorporate the medical knowledge encoded in over fifty years of manually curated AI expert systems for medical diagnosis?”

We tackled these questions by treating expert systems as a data prior: we used an expert system as a clinical case simulator, and the simulated cases serve as training data for learning a diagnosis model.

Machine-learned (ML) model for differential diagnosis (DDx), combining simulated data from expert systems and COVID-19 assessment data. The resulting model is COVID-aware and can reason about multiple hypotheses based on the patient’s symptoms.

Learning from the experts: From expert systems to machine learned diagnosis models
M. Ravuri, A. Kannan, G. Tso, X. Amatriain
Machine Learning for Healthcare (MLHC), 2018
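The simulator-as-data-prior idea can be sketched in a few lines. The knowledge base below is a hypothetical toy, not taken from the paper: disease names and symptom probabilities are illustrative stand-ins for a curated expert system.

```python
import random

# Hypothetical toy knowledge base standing in for a curated expert
# system: disease -> probability of each symptom being present.
KNOWLEDGE_BASE = {
    "flu":         {"fever": 0.9, "cough": 0.8, "rash": 0.05},
    "measles":     {"fever": 0.8, "cough": 0.5, "rash": 0.90},
    "common_cold": {"fever": 0.2, "cough": 0.7, "rash": 0.01},
}

def simulate_case(disease, rng):
    """Sample one synthetic clinical case (a symptom vector) for a disease."""
    return {s: int(rng.random() < p) for s, p in KNOWLEDGE_BASE[disease].items()}

def simulate_training_set(n_per_disease, seed=0):
    """Turn the expert system into labeled training data for an ML model."""
    rng = random.Random(seed)
    return [
        (simulate_case(d, rng), d)
        for d in KNOWLEDGE_BASE
        for _ in range(n_per_disease)
    ]

cases = simulate_training_set(100)
```

Any standard classifier can then be trained on `cases`; crucially, the prior medical knowledge in the expert system shapes the training distribution without constraining the model class.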

With the COVID-19 pandemic, we showed how to quickly update these models to take into account our evolving knowledge of COVID-19 by leveraging data from a COVID-19 symptom checker updated frequently by practitioners. We show that our approach accurately models new incoming data about COVID-19 while preserving accuracy on conditions that have been modeled in the past.

COVID-19 in differential diagnosis of online symptom assessments
A Kannan, R Chen, V Venkataraman, G Tso, X Amatriain

One of the biggest challenges in learning a comprehensive diagnosis model for all diseases is the lack of access to sufficient data, whether because of HIPAA regulations or because of the rarity (which is good!) of diseases or their phenotypes. The International Classification of Diseases (ICD) lists thousands of diseases, while expert systems cover only a few hundred.

Long-tailed class distribution. Also shown are nearest neighbors to four of the many prototypes learned for select classes using the proposed Prototypical Clustering Network approach, illustrating the huge intra-class variability in the data. For a novel test image, shown at the upper right corner, the model predicts the correct class by measuring weighted similarity to per-class clusters in the embedding space.

We tackled this problem by explicitly posing medical diagnosis as a low-shot learning problem that can handle the long tail in the data and perform well on classes in both the head and the tail of the distribution. We extended prototypical networks to learn local clusters for each disease to handle multiple phenotypes. Once deployed, the model is also easily extensible to new disease classes by leveraging a few clinical cases labeled by a practitioner on the platform.

Prototypical Clustering Networks for Dermatological Disease Diagnosis
V Prabhu, A Kannan, M Ravuri, M Chablani, D Sontag, X Amatriain
NeurIPS Machine Learning for Health (ML4H), 2018
Machine Learning for Healthcare (MLHC), 2019
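A minimal sketch of the inference step, under simplifying assumptions: each class holds a list of prototype vectors (one per learned cluster/phenotype), and a test embedding is scored against each class by a soft-min weighted distance over its prototypes. How the prototypes and the embedding function are learned is not shown here.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_class(embedding, class_prototypes):
    """class_prototypes: class label -> list of prototype vectors.
    Score each class by a soft-min weighted distance over its
    prototypes; the class with the smallest weighted distance wins."""
    scores = {}
    for cls, protos in class_prototypes.items():
        dists = [euclidean(embedding, p) for p in protos]
        weights = [math.exp(-d) for d in dists]
        total = sum(weights)
        scores[cls] = sum(w * d for w, d in zip(weights, dists)) / total
    return min(scores, key=scores.get)
```

Note how extensibility falls out of the formulation: adding a new disease class only requires computing prototypes from a few labeled cases, with no retraining of existing classes.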

In recent work published at AMIA, we introduced the trade-off between the number of diseases covered and the accuracy of diagnosis models in user-facing settings. We show that modeling only the diseases of interest (e.g., those commonly seen on a platform) provides much better accuracy than modeling all diseases.

The accuracy vs. coverage trade-off in patient-facing diagnosis models
A Kannan, J Fries, E Kramer, J Chen, N Shah, X Amatriain
AMIA Informatics Summit, 2020

In subsequent work, we posed the problem of open-set medical diagnosis, in which the proposed model explicitly represents out-of-distribution cases. When the model predicts that a clinical case is out-of-distribution, this signals the model to completely hand off to the provider without making any suggestion. We also show how to build open-set diagnostic ensembles when training data is distributed across several healthcare sites that do not allow data pooling.

Open Set Medical Diagnosis
V Prabhu, A Kannan, G Tso, N Katariya, M Chablani, D Sontag, X Amatriain
NeurIPS Machine Learning for Health (ML4H), 2019
ACM Conference on Health, Inference, and Learning (ACM CHIL), 2020
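The resulting decision rule is easy to state in code. This sketch assumes the model already emits an explicit out-of-distribution probability alongside the per-disease probabilities; how that OOD probability is trained (the substance of the paper) is not shown.

```python
def diagnose_or_defer(class_probs, ood_prob, ood_threshold=0.5):
    """class_probs: disease -> probability for in-distribution classes.
    ood_prob: the model's explicit out-of-distribution probability.
    When the case looks out-of-distribution, return None: make no
    suggestion and hand off entirely to the provider."""
    if ood_prob >= ood_threshold:
        return None
    return max(class_probs, key=class_probs.get)
```

Abstaining rather than guessing is what lets the system gracefully fall back to providers and, over time, build their trust.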

Natural language processing

At Curai Health, the primary mode of interaction with patients and users is text-based chat. For patients, this fits into their busy lives, as it is affordable and accessible in the moment. For the practitioners on the platform, a chat interface improves the ability to scale operationally, since it can combine synchronous and asynchronous modalities at the appropriate level of prioritization. Therefore, as we describe in our previous blog post, advances in natural language understanding are important to our work.

There have been tremendous advances in NLP (e.g., GPT-3, BERT) in recent years. However, a number of important challenges remain when applying NLP to healthcare, including imparting medical knowledge into language models and overcoming the lack of task-specific datasets. We will now describe how we are tackling these challenges.

The rate at which medical questions are asked online far exceeds practitioners’ capacity to answer them. One way to address this problem is to build a retrieval system that can automatically match unanswered questions to answered questions with the same meaning, or mark them as a priority for a doctor if no similar answered question exists. Building such a retrieval system poses two main challenges. First, we need a dataset that encodes nuances in medical knowledge (e.g., “Can I take Aleve during pregnancy?” vs. “Can I take Aleve after pregnancy?”). Second, we recognize that in healthcare a simple answer might not always resolve the concern (as in the example above); therefore, machine-learned models for question answering should be able to direct the patient to the providers.

Examples of questions from the users that are matched to questions in our FAQ. Notice how we also suggest discussing their health issue with the providers for certain user questions.

To solve the first problem, in work presented at NeurIPS Machine Learning for Health 2019, we augment language models with the depth of information stored in a medical question-answer pair dataset to provide the needed medical knowledge. We show that pre-training a transformer network on a medical question-answer matching task embeds relevant medical knowledge into the model that it otherwise would not have. We then fine-tune on an in-house (now publicly available) dataset of question-question similarity.

Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs
C McCreery, N Katariya, A Kannan, M Chablani, X Amatriain
KDD, 2020

During the COVID-19 pandemic, there was a huge surge in the number of questions from users. We extended the above work to a curated dataset of COVID-related FAQs, and enabled an option on our platform to directly connect patients with our providers when needed.
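The match-or-flag retrieval pattern described above can be sketched as follows. The bag-of-words embedding here is a deliberately crude placeholder: it treats “during pregnancy” and “after pregnancy” as nearly identical, which is exactly the nuance the medically pre-trained transformer encoder in our actual system is meant to capture.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; the real system uses a transformer
    # pre-trained on medical question-answer matching instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def match_or_flag(query, answered_questions, min_similarity=0.5):
    """Return the most similar already-answered question, or None to
    flag the query for a doctor when nothing is similar enough."""
    q = embed(query)
    best, best_sim = None, 0.0
    for question in answered_questions:
        sim = cosine(q, embed(question))
        if sim > best_sim:
            best, best_sim = question, sim
    return best if best_sim >= min_similarity else None
```

The `min_similarity` cutoff is what implements the second requirement above: when no answered question is close enough, the query is routed to a provider rather than answered automatically.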

Another area of research we actively investigate is learning conversational agents that interact with patients. Learning medical dialog agents is hard for many reasons. First, the definition of the “task” is subtle: medical diagnosis encompasses identifying the correct diagnosis (out of a few thousand possibilities) for a patient. Second, the models need to be extremely precise in uncovering all the pertinent information from the patient, jointly performing medical reasoning while gathering information.

We took an initial step in this direction by framing the problem as a conversational assistant to a practitioner, in which the model suggests responses to the practitioner based on the context of the conversation so far. We formulated response generation as a discrimination task whose goal is to rank similar responses from response clusters. We show that our approach of clustering contexts and identifying responses describing the context achieves much better accuracy than learning a full generative model for response generation.

Classification As Decoder: Trading Flexibility For Control In Neural Dialogue
S Shleifer, M Chablani, N Katariya, A Kannan, X Amatriain
NeurIPS Machine Learning for Health (ML4H), 2019
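The discrimination framing can be illustrated with a toy ranker. The keyword-overlap scoring below is a hypothetical stand-in for the learned neural context encoder, but the shape of the idea is the same: instead of generating free text, rank a fixed set of cluster-level canonical responses, trading flexibility for control over what is said to patients.

```python
def suggest_response(context, response_clusters):
    """response_clusters: list of (keyword set, canonical response).
    Score each cluster against the conversation context and return
    the best cluster's canonical response."""
    tokens = set(context.lower().split())
    def overlap(keywords):
        return len(keywords & tokens) / len(keywords)
    ranked = sorted(response_clusters, key=lambda c: overlap(c[0]), reverse=True)
    return ranked[0][1]
```

Because every response the model can emit comes from a vetted cluster, the system never produces unreviewed free-form medical advice.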

Another important research area is medical dialogue summarization. Medical dialogue summaries enable effective hand-off of the patient between practitioners, so that they can be used for medical decision making and subsequent follow-ups. For the patient, the summary also serves as a detailed record of their conversation with their provider. Understanding a medical conversation between a patient and a provider poses unique natural language understanding challenges, since it combines elements of standard open-ended conversation with very domain-specific elements that require expertise and medical knowledge.

In recent work, we extend the pointer-generator network by incorporating important properties of medical conversations, such as medical knowledge drawn from standardized medical ontologies, and by taking into account whether the patient affirms or negates each symptom.

Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures
A Joshi, N Katariya, X Amatriain, A Kannan
EMNLP — Findings, 2020

Example of a medical dialogue and summary generated by our model. Note that the summary captures affirmatives, negatives and medical concepts.
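To see why affirmations and negations matter, consider a rule-based sketch of the tagging they require. This is not the paper’s model (which is a pointer-generator with ontology features); it only illustrates the local structure a faithful summary must preserve, and the negation cue list is illustrative.

```python
NEGATION_CUES = {"no", "not", "don't", "denies", "never"}

def extract_findings(turns, concept_vocab):
    """Tag each medical concept the patient mentions as affirmed or
    negated by looking for a negation cue shortly before it."""
    findings = {}
    for speaker, text in turns:
        if speaker != "patient":
            continue
        tokens = text.lower().replace(",", " ").replace(".", " ").split()
        for i, token in enumerate(tokens):
            if token in concept_vocab:
                # Naive scope: a cue within the 3 preceding tokens negates.
                window = tokens[max(0, i - 3):i]
                negated = any(w in NEGATION_CUES for w in window)
                findings[token] = "negated" if negated else "affirmed"
    return findings
```

A summarizer that dropped the single word “no” from “no cough” would produce a clinically dangerous summary, which is why these local structures receive explicit modeling.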

We are working at a time of tremendous opportunity, as well as obligation, to scale access to care. We envision that technology and AI will play a huge role in this. At the same time, AI systems cannot simply be dropped into existing workflows; they need to be integrated into end-to-end medical care, benefiting both patients and providers. At Curai, we are focused on exactly this. Please reach out if this area of research interests you and you are interested in being part of our mission.



