For Responsible AI, Explainable AI is not just Necessary, it is now also Cool.

Ankur Teredesai
Nov 6, 2019

Why explanations will make AI systems more responsible.

The field of Artificial Intelligence (AI) is over sixty years old. Yet it is only in the last decade that we have seen AI systems become increasingly intertwined with the social fabric. AI now enables us all, and this holds tremendous promise for global societal impact [6]. What, then, are the main considerations for putting such systems into mass production?

Most AI systems today rely on computer programs that learn from examples, an approach popularly known as Supervised Learning, a branch of Machine Learning (ML). Neural Networks, Deep Learning, and Decision Trees are a few examples of such supervised learning algorithms that learn from data. How do we make AI systems that use Machine Learning models responsible? Why is there an urgency now to make AI systems more accountable and regulated? Hundreds of examples of this need have emerged and made their way into the popular press: from self-driving cars, to making healthcare AI robust, to ensuring bias-free credit rating predictions, and, more critically, ensuring that our criminal justice system is fair. Research today suggests that one of the most important ways to increase eventual accountability is to first ensure that Machine Learning models are explainable. Explainability and Interpretability, two terms used interchangeably, are not sufficient, but they are widely considered necessary for Responsible AI systems.

Explanation is now regarded as a core component of Machine Learning workflows in highly regulated industries and in most societal use cases that involve close human interaction.

First, a moment for Celebration…

On October 10th, 2019, Gartner released an independent Cool Vendor study evaluating who is making big strides in commercial AI systems focused on Explainable ML, and why [7]. When the news first reached us, I was ecstatic to hear that such a study had been conducted. That a leading analyst firm is focusing energy and resources on a topic considered mainly an academic computer science research area until now is a wonderful testament that Explainability of AI is becoming a mainstream requirement. The report, titled “Cool Vendors in Enterprise AI Governance and Ethical Response”, featured five emerging vendors that Gartner recommended data and analytics leaders watch to help enterprise customers better govern their AI solutions. When I highlighted the need for Responsible AI by focusing on context rather than the size of our datasets at the ACM KDD 2019 conference Opening Session in August 2019 [5], I had no idea the topic was already being analyzed by Gartner.

What makes this even more exciting is that KenSci and our Healthcare AI platform are one of the five companies recognized in this study. Thinking back, several success factors played a critical role in KenSci being considered a leader in this field. This is a proud moment, not just for KenSci, but for the entire AI and ML community, and particularly for all the academic and industry research groups that deeply care about and work tirelessly on making AI more responsible. I’d like to call out a few factors that helped shape this journey, though much work remains ahead of us:

1) Focus on Assistive, not Artificial, Intelligence: I’ve been a proponent of human-in-the-loop AI for several years now [2]. KenSci’s intense focus on research into explainable models of AI for healthcare is driven by the need to make our solutions more assistive. It is what makes platforms like KenSci trustable, accountable, and useful to healthcare systems across the world. Everyone at KenSci, from clinicians to engineers, from data scientists to our academic collaborators, is dedicated to enabling explanations to accompany our predictions. Allowing the user to understand the reasoning behind the predictions is central. This makes the strongest case for moving healthcare AI away from black-box models to a more open ecosystem.

2) Our customers are our partners: AI cannot be implemented in isolation. We engage in a spirit of collaboration with our customers and partners every day. It helps us evolve the KenSci platform and tailor it to solve real-world problems in healthcare across the risk and cost spectrum. Our customers are our biggest critics and our fairest appraisers, and it is only with their constant feedback, insisting that ML be not a magic black box but a working, living system, that we have been able to scale the KenSci platform. Demanding openness and explanations is central to ensuring AI serves the customer use cases that help them, in turn, better serve their own customers: the patients.

3) One Team, One Dream!: I’m surrounded by brilliant and hardworking data scientists, clinicians, and engineers who bring this vision of Responsible AI to life every day. Together we are working on some of the hardest problems in computer science. Groundbreaking innovations happen when an amazing, inspiring, dedicated, and relentless team of individuals comes together in pursuit of helping healthcare organizations operationalize AI and data science. It’s an absolute honor to work with each of you!

Towards better operationalization of AI

While healthcare organizations are making steady strides in developing a robust AI strategy, fundamental underlying challenges still impede its successful operationalization. For healthcare, clinician engagement and efficacy proofs aside, explainable AI is core to solving these issues. Together we’re addressing many such issues, which can help health systems trust and engage with AI predictions in a more reliable and efficient manner. Explainability and Interpretability are cornerstones of bringing a sense of trust to AI, as I will highlight below when I discuss the seven pillars of Explainable AI. For example, imputation often introduces issues, and if these issues are not surfaced to the end user, then acting upon explanations from such models can lead to dire consequences [1]. Organizations are looking to accelerate their ML deployment, and efforts from KenSci, such as a healthcare-aware ML runtime that provides explanations at both the population and the individual level, help accelerate this journey. Other industry leaders are also developing interesting toolkits [3] that simplify the process of Machine Learning, with features ranging from a new interface for a tool that completely automates model creation, to a no-code visual interface for building, training, and deploying models, all the way to hosted notebooks for advanced users.
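To make the imputation example concrete, here is a minimal sketch, illustrative only and not KenSci’s runtime, of how a scikit-learn pipeline might surface which values were imputed alongside a population-level importance summary and an individual-level view. The feature names, labels, and cutoffs are hypothetical.

```python
# Illustrative sketch (not KenSci's implementation): surface imputation
# alongside explanations so users can see when an explanation rests on
# imputed rather than observed values.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance

# Hypothetical patient features with missing labs and vitals.
X = pd.DataFrame({
    "age":         [71, 54, np.nan, 63],
    "systolic_bp": [142, np.nan, 118, 135],
    "creatinine":  [1.8, 1.1, np.nan, 2.3],
})
y = np.array([1, 0, 0, 1])  # e.g., a 30-day readmission label

# Track which cells were imputed before the model ever sees them.
imputed_mask = X.isna()
imputer = SimpleImputer(strategy="median")
X_filled = pd.DataFrame(imputer.fit_transform(X), columns=X.columns)

model = GradientBoostingClassifier().fit(X_filled, y)

# Population-level explanation: permutation importance over the cohort.
pop = permutation_importance(model, X_filled, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, pop.importances_mean):
    print(f"{name:>12}: importance={score:+.3f}")

# Individual-level view: report each feature value together with a flag
# telling the clinician whether that value was observed or imputed.
i = 2  # third patient in this toy cohort
for name in X.columns:
    flag = "IMPUTED" if imputed_mask.loc[i, name] else "observed"
    print(f"patient {i} {name:>12} = {X_filled.loc[i, name]:6.1f} ({flag})")
```

A production system would go further, for example by propagating the imputation flags into the explanation display itself, but even this small amount of bookkeeping keeps the end user from acting on an explanation that silently rests on filled-in values [1].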

Another case in point is the need for correctness in the operational use of AI. I’ll illustrate this with a few examples from healthcare. Responsible AI through explanations addresses three main types of correctness scenarios: (a) Syntactic Correctness: is the data in the correct format? For example, the AI data pipeline may require gender denoted for males as ‘m’ while the input data encodes it as ‘1’. (b) Morphological Correctness: is the data within the range of possible values? For example, a blood pressure of 500 does not make sense. (c) Semantic Correctness: do the variables actually correspond to the semantics being ascribed to them? For example, a variable that encodes blood pressure as high vs. low will have different semantics for children compared to adults. As you can see, just addressing the problem of correctness through explainability can reap huge rewards in production-level AI systems.
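The three correctness checks can be made tangible with a small, hypothetical validation sketch; the field names, encodings, and cutoffs below are assumptions for illustration, not KenSci’s actual rules.

```python
# Illustrative sketch of the three correctness checks described above,
# applied to a single (hypothetical) patient record.
from typing import Dict, List

def check_syntactic(record: Dict) -> List[str]:
    """Is the data in the expected format? e.g., gender should be 'm' or 'f',
    not a numeric encoding such as '1'."""
    issues = []
    if record.get("gender") not in {"m", "f"}:
        issues.append(f"gender={record.get('gender')!r} is not in the expected format")
    return issues

def check_morphological(record: Dict) -> List[str]:
    """Is the value within the range of possible values? A blood pressure
    of 500 does not make sense."""
    issues = []
    sbp = record.get("systolic_bp")
    if sbp is not None and not (40 <= sbp <= 300):
        issues.append(f"systolic_bp={sbp} is outside the plausible range")
    return issues

def check_semantic(record: Dict) -> List[str]:
    """Does the encoding mean what we ascribe to it? A 'high'/'low' blood
    pressure flag based on adult cutoffs means something different for a child."""
    issues = []
    if record.get("bp_category") in {"high", "low"} and record.get("age", 99) < 18:
        issues.append("bp_category uses adult cutoffs but the patient is a child")
    return issues

record = {"gender": "1", "systolic_bp": 500, "bp_category": "high", "age": 9}
for check in (check_syntactic, check_morphological, check_semantic):
    for issue in check(record):
        print(f"{check.__name__}: {issue}")
```

In a real pipeline these checks would run before training and scoring, and any violations would be surfaced next to the model’s explanations rather than silently corrected.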

There are seven main pillars of Explainable AI as described in the figure below:

Explainable ML has seven main pillars that help us understand how to make it relevant to Responsible AI efforts. Each pillar needs to progress in step with the others to keep pace with the growing expectations for operational use in production systems.

Each pillar plays a pivotal role, in its own way, in making AI systems Responsible. Traditional ML research reported at NeurIPS 2019 in Vancouver, KDD 2019 in Anchorage, AAAI 2020 in NYC, ICML 2020 in Vienna, ACM FAT* 2020 in Barcelona, and other ML venues will deliver significant advances on the Trust/Performance and Generalizability pillars. The Human Computation community, for example, in its recently concluded HCOMP 2019 conference, reported great advances along the ‘Domain Sense’ pillar. These are great avenues to follow for anyone interested in Explainable AI, and I highly encourage continued sponsorship of, participation in, and dialogue at these esteemed meetings. Yet much remains to be done across the other pillars, and there is a need for dedicated venues of discourse.

We’re still on the first step. Explainability has to be at the heart of the work we do every day. This Gartner recognition comes to us for our dedicated work in Explainable AI for Healthcare. We are deeply invested in helping our customer partners realize that AI need not be artificial, but assistive in making decisions. It is an endorsement of our commitment to transform healthcare.

But it doesn’t end there.

AI can impact millions of lives across the globe, and we can’t wait to help create a responsible governance and explainability framework that improves daily life for billions. We as a community won’t rest until we understand all the dimensions and have a meaningful discourse around them. We’re constantly learning, we’re always improving. This marathon journey begins exactly where we are, at the first step: starting to be considered cool and noteworthy.

Ankur Teredesai, CTO KenSci & Professor, University of Washington Tacoma

P.S. If you’d like to know more about my and KenSci’s work in Explainable AI for healthcare, you can download our research papers here [4].

P.P.S. We’re always on the lookout for smart, talented, driven, highly motivated individuals to join our small yet mission-driven team. Drop us a note.

REFERENCES:

1. The Challenge of Imputation in Explainable Artificial Intelligence Models, Muhammad Aurangzeb Ahmad, Carly Eckert, Ankur Teredesai, The IJCAI-19 Workshop on Artificial Intelligence Safety, https://arxiv.org/abs/1907.12669

2. AI making humans fundamental — https://www.geekwire.com/2018/health-tech-podcast-ai-making-humans-fundamental-thing-internet-things/

3. InterpretML: A Unified Framework for Machine Learning Interpretability, Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana, https://arxiv.org/abs/1909.09223

4. https://www.kensci.com/explainable-machine-learning/

5. https://www.youtube.com/watch?v=Xcb7k2j5PrU

6. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html

7. Cool Vendors in Enterprise AI Governance and Ethical Response, Van Baker, Saniye Alaybeyi, Alys Woodward, Svetlana Sicular, Erick Brethenoux, Jim Hare, Published and accessed: 10 October 2019. https://www.gartner.com/en/documents/3970240/cool-vendors-in-enterprise-ai-governance-and-ethical-res

