Is Your AI Model Explainable?

Rupa Singh
4 min read · Aug 12, 2023


If a machine learning model performs well, why not simply trust it and accept the decisions it makes?

As AI systems increasingly proliferate in high-stakes domains such as healthcare, finance, aviation, automated driving, manufacturing, and law, it becomes ever more crucial that these systems can explain their decisions to diverse end-users in a comprehensible manner.

Tech giants like Google, Facebook, and Amazon collect and analyze ever more personal data through smartphones, voice assistants such as Siri and Alexa, and social media, enabling them to model and predict individuals better than other people can. As tasks with higher sensitivity and social impact are increasingly entrusted to AI services, there is a growing demand for explainable, accountable, and transparent AI systems.

Currently, many such AI systems are non-transparent with respect to their working mechanism, which is why they are called black-box models. This black-box character poses serious problems in a number of fields, including the health sciences, finance, and criminal justice, and creates a demand for explainable AI.

Explainable AI aims to:

  1. Produce more explainable models while maintaining a high level of learning performance (e.g. prediction accuracy)
  2. Enable humans to understand, trust, and effectively manage the emerging generation of artificially intelligent partners.

Goals of Explainable AI:

In general, humans are reluctant to adopt techniques that are not directly interpretable, tractable, and trustworthy. The danger lies in creating and using decisions that are not justifiable, legitimate, or explainable. Explanations supporting the output of a model are crucial, e.g., in precision medicine, where experts and end-users require far more detailed information from the model than a simple binary prediction to support their diagnosis.

There is a trade-off between the performance of a model and its transparency. However, with improved understanding and explainability of a system, its deficiencies can also be identified and corrected. If a system is not opaque and one can understand how inputs are mathematically mapped to outputs, then the system is interpretable; this also implies model transparency.
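To make the notion of an interpretable mapping concrete, here is a minimal sketch (the data and function names are illustrative assumptions, not from any particular XAI library): a simple linear regression fit by closed-form least squares. Because the input-to-output mapping is an explicit formula, every prediction can be decomposed into a baseline plus a per-feature contribution, which is exactly what a black-box model does not offer.

```python
# A transparent ("white-box") model: one-feature linear regression fit
# with the closed-form least-squares solution. The learned formula
# y = intercept + slope * x makes every prediction fully explainable.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def explain(x, slope, intercept):
    """Decompose a prediction into a baseline plus the feature's contribution."""
    contribution = slope * x
    return {
        "baseline": intercept,
        "contribution": contribution,
        "prediction": intercept + contribution,
    }

# Toy data generated by y = 2x + 1, so the fit recovers slope 2, intercept 1.
slope, intercept = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(explain(10, slope, intercept))  # prediction = 1 + 2*10 = 21
```

A deep network computing the same prediction would give no such decomposition for free; post-hoc attribution methods exist precisely to approximate this kind of breakdown for opaque models.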

Recent surveys and theoretical frameworks of explainability focus on five main questions:

  1. What is an explanation?
  2. What are the purposes and goals of an explanation?
  3. What information does an explanation contain?
  4. What types of explanation can a system give?
  5. How can we evaluate the quality of an explanation?

Current theoretical approaches to explainable AI also reveal that not enough attention is paid to what we believe is a key component: who are the explanations targeted at?

It has been argued that explanations cannot be monolithic: each stakeholder looks for explanations with different objectives, different expectations, different backgrounds, and, of course, different needs. How we approach explainability is the starting point for creating explainable models, and it allows us to set out the following three pillars on which an explanation is built:

🌟 Goals of an explanation.

🌟 Content of an explanation, and

🌟 Types of explanation.
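One hypothetical way to picture these three pillars is as fields of a small data structure that is tailored per stakeholder. All class names, fields, and stakeholder entries below are illustrative assumptions, not a standard XAI API:

```python
# Sketch: an explanation carries a goal, content, and type (the three
# pillars), and different stakeholders receive different instances.
from dataclasses import dataclass

@dataclass
class Explanation:
    goal: str     # why the explanation is produced
    content: str  # what information it carries
    kind: str     # the type of explanation given

def explanation_for(stakeholder: str) -> Explanation:
    """Return a stakeholder-appropriate explanation (illustrative catalog)."""
    catalog = {
        "clinician": Explanation(
            goal="support a diagnosis",
            content="which patient features drove the prediction",
            kind="feature attribution"),
        "regulator": Explanation(
            goal="verify compliance",
            content="evidence the decision process is fair and auditable",
            kind="counterfactual"),
        "end-user": Explanation(
            goal="build trust",
            content="a plain-language summary of the decision",
            kind="example-based"),
    }
    return catalog[stakeholder]

print(explanation_for("clinician").kind)  # feature attribution
```

The point of the sketch is that the same model decision maps to three different explanation objects, one per audience, rather than to a single monolithic explanation.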

Thank you for reading this article. I hope you found it useful.

Rupa Singh

Founder and CEO, AI-Beehive

Author of ‘AI Ethics with Buddhist Perspective’

