Using Explainable AI to Improve Personalized Drug Response Prediction


By: Dr. Alexander Wong

Since its inception, the pharmaceutical industry has invested considerable energy in the acute problem of cancer treatment. One of the greatest challenges is identifying what type of drug therapy is most effective for a specific cancer patient, as the therapeutic response to a particular drug can vary greatly from patient to patient due to the high diversity in disease characteristics.

Through our collaboration with enterprises and our research work on medical AI, we’ve discovered that our unique Explainable AI (XAI) gives healthcare practitioners valuable insights for enhancing drug response prediction in cancer patients, and it does so in a way that is transparent, responsible, and trustworthy. We are proud to be devising new XAI solutions that will help pharmaceutical and medical enterprises get lifesaving drugs into the hands of patients more quickly.

Promising research advancements to date

In recent years, a very promising direction in personalized cancer treatment has been the use of genomics to predict a cancer patient’s drug response based on the patient’s gene expression profile. By matching molecular disease signatures with the drug treatments that induce the strongest response, clinicians can greatly improve the chance of therapeutic success.

Despite the promise of genomics-driven drug response prediction, personalized cancer treatment has been difficult to adopt widely because there are very few preclinical models. These models are extremely challenging to develop given the significant interdependencies between drug compounds and the disease’s individual genetic makeup. Modeling these complex relationships is an ideal fit for deep learning, and recent studies have shown breakthrough results in creating deep learning-based preclinical models for predicting drug response in cancer patients. For example, Sakellaropoulos et al. published a paper in Cell Reports entitled “A Deep Learning Framework for Predicting Response to Therapy in Cancer” demonstrating dramatic improvements over traditional machine learning algorithms in determining the best drug therapy for a patient’s specific type of cancer based on gene expression.

DarwinAI Takes New XAI Research a Step Further

We wanted to leverage our core technology to uncover new insights into how deep learning-driven preclinical predictive models behave in order to improve personalized drug response prediction.

We also wanted to uncover which gene expression features are the most predictive of drug response and which hinder the model’s performance.

Overview of DarwinAI’s process

We trained a deep learning-based preclinical prediction model using the OCCAMS Cisplatin dataset to predict drug response based on gene expression data. We then used our unique quantitative XAI to gain deeper insights into which gene expression features the model used to make its predictions. Our XAI technology quantifies this understanding of the drug response prediction model at two distinct levels:

  • Patient level — provides insights into the most predictive gene expression features for predicting the effectiveness of a drug for a given patient.
  • Model level — provides insights into the gene expression features that are most predictive of a drug’s effectiveness in general, across all cancer patient cohorts. Model-level explainability is of particular interest for improving personalized drug response prediction, as it lets us discover both the gene expression features most predictive of drug effectiveness AND the detrimental features that hinder the predictive performance of a preclinical predictive model (a distinction sketched in code after this list).
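Below is a minimal sketch of the patient-level vs. model-level distinction. It uses gradient-times-input attribution as a generic stand-in, since DarwinAI’s quantitative XAI technology is proprietary; the model and cohort tensors are hypothetical placeholders, not our actual pipeline.

```python
# Sketch only: gradient-times-input as a generic attribution method,
# standing in for DarwinAI's proprietary quantitative XAI.
import torch

def patient_level_impact(model, x):
    """Signed per-feature impact for one patient's gene expression
    vector x (shape: [n_features]), via gradient-times-input."""
    x = x.clone().requires_grad_(True)
    response = model(x.unsqueeze(0)).squeeze()  # predicted drug response
    response.backward()
    return (x.grad * x).detach()

def model_level_impact(model, cohort):
    """Aggregate patient-level impacts over a whole cohort
    (shape: [n_patients, n_features]) into one score per feature:
    positive -> predictive on average, negative -> hinders the model."""
    impacts = torch.stack([patient_level_impact(model, x) for x in cohort])
    return impacts.mean(dim=0)
```

Under this reading, a feature with a large positive model-level score is consistently predictive across the cohort, while a strongly negative score flags a gene whose signal, on average, pushes the model toward incorrect predictions.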

Here is an illustration of the 50 most predictive gene expression features (top) and the 50 features most likely to cause an incorrect prediction (bottom) among the ~15,000 gene expression features input to the deep learning-based preclinical predictive model. Our XAI technology provides a quantitative view of the model’s behaviour: high positive impact scores mark the most predictive features, while negative impact scores indicate performance hindrance.

The top 50 gene expression features identified as most predictive of a patient’s response to a particular drug therapy have very high quantitative impact, indicating that building a predictive model on these features can yield strong predictive performance. More interestingly, the bottom 50 gene expression features have very negative quantitative impact, indicating that removing them can lead to even greater predictive performance.
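As a hedged illustration of the pruning step this implies, the sketch below drops the negative-impact features and retrains on the reduced input. `model_level_impact` is the hypothetical helper from the previous sketch, and the training routine is an assumption rather than DarwinAI’s actual workflow.

```python
# Sketch only: prune features flagged as detrimental, then retrain.
import torch

def prune_features(X, impact_scores):
    """Keep only the gene expression features whose model-level
    impact is non-negative. X: [n_patients, n_features]."""
    keep = (impact_scores >= 0).nonzero(as_tuple=True)[0]
    return X[:, keep], keep

# Illustrative use: reduce the ~15,000 input features, then retrain.
# scores = model_level_impact(model, X_train)            # previous sketch
# X_reduced, kept_idx = prune_features(X_train, scores)
# better_model = train(X_reduced)   # hypothetical retraining routine
```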

By illuminating how therapies interact with specific cancers, researchers may be able to develop more effective drug therapies and deliver them to patients in a more timely manner, potentially saving countless lives.

It is our hope that this analysis provides a useful overview of how Explainable AI not only mitigates the ‘black box’ problem inherent in deep learning, but also how tasks such as drug response prediction in cancer patients can be designed in a transparent, responsible, and trustworthy manner using the technology.

About the Author

Alexander Wong, P.Eng. is currently the Chief Scientist at DarwinAI, a Canada Research Chair in the area of Artificial Intelligence, a Member of the College of the Royal Society of Canada, co-director of the Vision and Image Processing Research Group, and an associate professor in the Department of Systems Design Engineering at the University of Waterloo.

His area of expertise is scalable and explainable deep learning. He is an inventor of Generative Synthesis, Evolutionary Synthesis, and Random Graphical Models, and has published over 600 refereed journal and conference papers, as well as patents.

He has received numerous awards including three Outstanding Performance Awards, a Distinguished Performance Award, an Engineering Research Excellence Award, a Sandford Fleming Teaching Excellence Award, an Early Researcher Award from the Ministry of Economic Development and Innovation, a Best Paper Award at the NIPS Workshop on Transparent and Interpretable Machine Learning (2017), a Best Paper Award at the NIPS Workshop on Efficient Methods for Deep Neural Networks (2016), two Best Paper Awards from the Canadian Image Processing and Pattern Recognition Society (CIPPRS) (2009 and 2014), a Distinguished Paper Award from the Society for Information Display (2015), and four Best Paper Awards at the Conference on Computer Vision and Intelligent Systems (CVIS) (2015, 2017, 2018, 2019).

DarwinAI, the explainable AI company, enables enterprises to build AI they can trust. DarwinAI’s solutions have been leveraged in a variety of enterprise contexts, including advanced manufacturing and industrial automation. Within healthcare, DarwinAI’s technology led to the development of COVID-Net, an open-source system for detecting COVID-19 in chest x-rays.

To learn more, visit darwinai.com or follow @DarwinAI on Twitter.

If you liked this blog post, click the 👏 below so other people will see this on Medium. For more insights from our team, follow our publication and @DarwinAI. (Plus, subscribe to our letters if you’d like to hear from us!)
