Explainable AI: Saliency Maps

A blog series where each post introduces a practical tool for explaining machine learning models

Bijil Subhash
Mar 6, 2022

In this article, we look at saliency maps as a tool for explaining computer vision models. Formally, a saliency map measures the spatial support of a particular class in an image for a given computer vision model. In the interest of keeping things simple, we can redefine a saliency map as an image that highlights the regions our model attends to. An example of a saliency map, highlighting the pixels that are most important for correctly classifying the pictures on the left, is shown below.

Saliency maps (Source: geeksforgeeks.org/what-is-saliency-map)

Why should we care about saliency maps?

Neural networks are a powerful class of models with applications across a range of sectors. Over the last decade, we have witnessed their unparalleled ability to approximate a wide range of complex computer vision problems with high accuracy. By design, however, neural networks are black-box models with little interpretability. This is at odds with ethical AI principles and prevents us from harnessing their full potential in practical applications. As such, we should put effort into understanding what our models are learning. Saliency maps are a great tool for understanding what the convolutional layers of a vision model are seeing, allowing us to deploy these models to production in an informed manner. They can also be used to troubleshoot models that are not performing as expected. More importantly, adding interpretability to complex architectures such as neural networks can create a feedback loop, surfacing new knowledge about the domain of interest and becoming a catalyst for rapid innovation in that sector.

Saliency Map Implementation

Before jumping into the implementation of a saliency map, we need a model. So, I am going to create a binary classification model for cats and dogs using a Kaggle dataset. The code is shown below. I am not going to explain the modelling side in detail, as it is a bit out of scope, but I hope the comments are enough to follow the logic. Alternatively, you can use Google’s Teachable Machine to rapidly create a high-quality CNN model, which you can download and use in your notebook when generating the saliency maps.
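As a minimal sketch of that training code (the data directory, the 128×128 input size, and the exact layer choices below are illustrative assumptions you can adapt), a small Keras CNN along these lines will do:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)   # assumed input resolution
BATCH_SIZE = 32

# Load a local copy of the Kaggle dogs-vs-cats images;
# "data/train" is a placeholder path with one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# A small CNN: three conv/pool blocks, then a dense head ending
# in a single sigmoid unit for the binary cat-vs-dog label.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Keeping the Rescaling layer inside the model means raw 0–255 images can be fed straight in, which also simplifies the saliency code later on.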

Training result

Our model reaches 87% accuracy, which is reasonable considering we used just under 10% of the total data with minimal optimization. We could have built a better model with hyperparameter tuning, pre-trained models, and more data, but those are not the focus here.

The saliency map is calculated using the Saliency method in TensorFlow. Under the hood, the calculation is backpropagation-like: the derivative of the class score is taken with respect to the input image, which identifies the pixels that need to change the least to affect the class score the most. For the curious reader, I recommend this paper, which does a deep dive into the math behind the algorithm. Here is the code I used to generate saliency maps for a dog and a cat image.
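A minimal sketch of that gradient computation, using tf.GradientTape on the binary model from above (the test-image path and the 128×128 size are placeholder assumptions):

```python
import matplotlib.pyplot as plt
import tensorflow as tf

def saliency_map(model, image):
    """Gradient of the class score with respect to the input pixels."""
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)              # track gradients w.r.t. the image
        score = model(image)           # single sigmoid class score
    grads = tape.gradient(score, image)
    # Collapse the RGB channels by taking the maximum absolute gradient,
    # then normalise to [0, 1] for display.
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]
    saliency -= tf.reduce_min(saliency)
    saliency /= tf.reduce_max(saliency) + 1e-8
    return saliency.numpy()

# Placeholder image path; resize to match the model's input size.
img = tf.keras.utils.load_img("data/test/dog.jpg", target_size=(128, 128))
img = tf.keras.utils.img_to_array(img)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(img.astype("uint8"))
axes[0].set_title("Image")
axes[1].imshow(saliency_map(model, img), cmap="hot")
axes[1].set_title("Saliency map")
for ax in axes:
    ax.axis("off")
plt.show()
```

Because the model ends in a single sigmoid unit, the magnitude of that score's gradient at each pixel serves as the saliency value; taking the channel-wise maximum collapses the RGB gradients into one heat map.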

Side-by-side view of the images and their respective saliency maps built from the model

There you have it: images and saliency maps of a dog and a cat side by side, as interpreted by our model. I hope this quick tutorial assists you in developing a deeper intuition about computer vision models.

