AI and the Explainability Paradox

Karthik Vadhri
Published in Intuition Matters · 4 min read · Oct 4, 2022

The world of Artificial Intelligence (AI) is constantly evolving. Every day, new technologies are being developed that can make our lives easier, yet there is still a lot of confusion about how these technologies work and what they do. One area people often struggle with is explainability: the ability to explain, in plain English, why your algorithms behave the way they do, so humans can understand what’s going on under the hood. If that sounds confusing or intimidating at first glance, it shouldn’t. Here’s what it all means:

AI is not perfect, so you need to be able to explain it.

What is Explainable AI?

Explainable AI is an umbrella concept for making Artificial Intelligence interpretable and transparent. In other words, explainable AI tries to make the algorithms behind these computerised predictions more understandable by humans.
This isn’t a new field — it’s been around for decades — but it has become more important recently as people are starting to use AI in their daily lives and want more control over how these systems operate.

XAI is a subdomain of ML which tries to make machine learning models human-interpretable.

Explainable AI helps in building data products that can be understood by humans, not just machines. It also helps businesses make decisions faster, with better accuracy and at lower cost than traditional methods such as hand-written rules or expert opinions.

The Crux of XAI (Source: DARPA)

Explainability has been a long-standing goal in science, especially in fields like cognitive science, which focuses on explaining how humans think and form explanations. Recently, the drive to build machines that reason or think more like humans has fuelled research on explainable Artificial Intelligence (XAI). Defining causality remains one of the most important research topics in machine learning, and in computer science there are many different approaches to explainability, often strongly tied to the application domain.

Explainability aims at explaining how an AI system works. It is important for humans to understand this, and it can be achieved through different approaches:

  • Modelling the reasoning process of human beings, e.g., using supervised or unsupervised learning methods based on data about their behaviour;
  • Establishing causal relationships between different variables in the system (or groups of variables), e.g., by using Bayesian inference techniques (a toy sketch of this appears after the list);
  • Showing how an action taken by an agent affects other parts of its environment or other agents’ actions (for example, showing whether it generated new knowledge).
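To make the second approach concrete, here is a minimal, hypothetical sketch of Bayesian reasoning between two variables: how observing one piece of evidence updates our belief about another variable. The scenario and all the numbers are illustrative assumptions, not drawn from this article:

```python
# A toy sketch of Bayesian inference between two variables: how a
# positive test result updates belief in an underlying condition.
# All probabilities below are illustrative assumptions.

prior = 0.01            # P(condition): base rate before any evidence
sensitivity = 0.95      # P(positive test | condition)
false_positive = 0.05   # P(positive test | no condition)

# Total probability of observing a positive test (law of total probability).
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(condition | positive test).
posterior = sensitivity * prior / p_positive

print(f"Belief after a positive test: {posterior:.1%}")  # ~16.1%
```

The same update rule underpins more elaborate Bayesian inference libraries; the point is that the relationship between evidence and conclusion is stated explicitly, rather than buried in model weights.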

To make AI explainable, algorithms can be designed from the beginning with interpretability in mind. Some types of algorithms are inherently more interpretable than others: linear models such as those used in regression analysis are relatively easy to read, whereas deep neural networks are much more difficult for humans to understand. The availability of open-source implementations like SHAP, MAGIE, etc., which can explain most ML algorithms without you having to write your own code, has made explainability more accessible than ever.
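As a rough sketch of both ends of that spectrum, the example below first reads the coefficients of an inherently interpretable linear model, then applies SHAP to a less transparent random forest. It assumes scikit-learn and the shap package are installed; the diabetes dataset and the model choices are my own illustrative picks, not prescribed by the article:

```python
# A minimal sketch contrasting an inherently interpretable model with
# post-hoc explanation via SHAP. Requires scikit-learn and shap
# (pip install shap); dataset and models are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)

# 1. Interpretable by design: a linear model's coefficients directly
#    state how each feature moves the prediction.
linear = LinearRegression().fit(X, y)
for name, coef in zip(X.columns, linear.coef_):
    print(f"{name}: {coef:+.1f}")

# 2. Post-hoc explanation: SHAP attributes each prediction of a less
#    transparent model (a random forest) to individual features.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(X)  # one contribution per feature per row
shap.summary_plot(shap_values, X)       # global view of feature importance
```

Depending on your shap version the exact plotting API may differ slightly; the essential point is that a few lines of library code yield per-feature attributions without writing your own explainer.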

For a hands-on overview, the Towards Data Science article “A guide to 7 Packages in Python to Explain Your Models” covers 7 different implementations of explainable AI in Python.

Why is Explainability an important part of AI Adoption?

A business cannot rely solely on data and automation tools when it comes to decision-making. It is crucial that leaders always have access to the reasons behind every choice made by their systems, which helps them avoid potential problems such as litigation risks, legal liabilities and operational issues.

Explainable AI will help you avoid litigation risks, legal liabilities and operational issues.

Explainability enhances trust. Lack of trust is one of the key factors hindering the mass adoption of AI. Explainability breaks open the black box and provides the capability to give a justified opinion on business-critical decisions.

Explaining complex models becomes increasingly important when employing them in real-world applications. This is even more relevant if such models are used in high-stakes domains, such as healthcare or law enforcement, where an explanation can help users gain trust.

Explainability is also important for accountability and compliance with regulation. For example, if you are building a self-driving car that has been programmed to drive safely on the highway by avoiding pedestrians and other vehicles at junctions, it is essential that the people responsible for the system can explain why it behaves the way it does, in a way that demonstrates understanding of the underlying concepts involved (such as the law or the rules of the road).

Conclusion

To sum up, explainable AI is an umbrella concept for making artificial intelligence interpretable and transparent, with XAI as the subdomain of ML that works to make machine learning models human-interpretable. Explainability has long been a goal in science, and the push to build machines that reason more like humans has driven recent research on XAI. In this article we looked at some of the ways you can use XAI in your own business or organisation by making algorithms more understandable, so they can be used effectively by their users.

Hope you enjoyed reading this, and now understand the hype around explainability and the need to explain AI algorithms in human-friendly language. Please feel free to share in the comments your experience with XAI and how it has helped improve the adoption of your AI models.

Stay tuned on Intuition Matters for more informative articles!
