Visual Interpretability — Explainable AI Visualization (Part 2)

Parvez Kose
Published in DeepViz
Apr 6, 2022 · 3 min read

This article continues the background overview of the research ‘Explainable Deep Learning and Visual Interpretability.’

An Introduction

Deep learning has led to unprecedented breakthroughs in many areas, such as computer vision, voice recognition, and autonomous driving. It has proved very powerful at solving large-scale real-world problems and has been adopted in many information processing applications, such as image recognition, language translation, and automated personalization. We hope these same techniques can diagnose deadly diseases, make trading decisions, and do many other things that will transform our lives and industries.

While a deep neural network learns efficient representations and delivers superior performance, understanding these models remains challenging because of their inherently opaque nature and unclear working mechanisms. They are often treated as black-box methods that simply perform their assigned tasks.

With a clear understanding of how and why a model works, it is easier for a user to determine when the model works correctly, when it fails, and how it can be improved. Without that understanding, users must treat neural networks as black boxes: they cannot explain how the mapping from input to output was learned or determine the reasons for a prediction. This lack of transparency is a drawback in applications involving high-stakes decision-making, especially in regulated industries that require techniques that can be understood and validated.

Additionally, automated decisions made by these models have far-reaching societal implications: they can widen class and racial inequality and embed bias and discrimination in the systems that rely on them. Consequently, transparency and fairness have been receiving more attention, and efforts are under way to make deep learning models more interpretable and controllable by humans, including building models that can explain their decisions, detecting model bias, and establishing trust in how the models will behave in the real world.

Deep learning models are harder to interpret than most machine learning models because the complicated representations they learn are difficult to extract and present in a human-readable form. While that holds for many model families, it is not entirely true for vision models such as convolutional neural networks (CNNs): the representations a CNN learns lend themselves well to visualization, largely because they are representations of visual concepts.

This work proposes a visual exploration tool, DeepViz, which uses an explainable-system approach, image localization, and visualization techniques to interpret the inference of a visual classification task. The tool jointly predicts a class label and presents visual evidence showing why the predicted label is appropriate for the given image, using the following two methods.

  1. Image Sensitivity: highlights the image region that contributes most to the classification decision by localizing the detected feature or object in the input image (see the first sketch after this list).
  2. Activation Graph: visualizes the intermediate outputs of the hidden layers to show how the network transforms an input through successive layers (see the second sketch after this list).
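Both ideas can be prototyped with standard deep learning tooling. The sketches below are minimal illustrations rather than the DeepViz implementation: they assume a PyTorch/torchvision environment, use an off-the-shelf ResNet-50 as a stand-in classifier, and the image path and layer names are hypothetical.

For image sensitivity, one common approach is a gradient-based saliency map, which highlights the pixels the predicted class score is most sensitive to:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Stand-in classifier; any pretrained CNN would do for this sketch.
model = models.resnet50(weights="IMAGENET1K_V1").eval()  # torchvision >= 0.13; older versions use pretrained=True

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")          # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)   # track gradients w.r.t. the pixels

scores = model(x)                                       # forward pass
pred = scores.argmax(dim=1).item()                      # predicted class index
scores[0, pred].backward()                              # backpropagate the class score to the input

# Per-pixel sensitivity: max absolute gradient over the colour channels.
saliency = x.grad.abs().max(dim=1)[0].squeeze()         # shape (224, 224), ready to render as a heatmap
```

For the activation graph, forward hooks can collect the intermediate outputs of selected layers so they can be rendered layer by layer:

```python
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
activations = {}

def save_activation(name):
    # Forward hook: records the layer's output each time it runs.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Layer names are specific to torchvision's ResNet; adapt them to the model under inspection.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(save_activation(name))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image tensor
with torch.no_grad():
    model(x)

# Each entry is a stack of feature maps, e.g. layer1 -> (1, 256, 56, 56);
# visualizing their channels shows how the input is transformed through successive layers.
for name, act in activations.items():
    print(name, tuple(act.shape))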

The next section presents a background overview of the historical context of deep learning and the origins of neural networks. Part 3 gives an overview of the common architectures of deep neural networks, then discusses the problems and challenges of black-box models and concludes by highlighting the societal implications of black-box systems and their wide-ranging impact across public domains. Part 4 highlights the importance of explainable and interpretable systems and recent developments in this research area. Part 5 describes the methodologies used in the research process, the research hypothesis, design goals, and challenges; it also covers implementation details, including the technical design and environment setup. The final part concludes with the research results and findings from prototype user testing.

The next article in this series covers the timeline of development in deep learning:

https://medium.com/deepviz/explainable-deep-learning-and-visual-interpretability-part-3-a3ea472fd5ba

Parvez Kose
DeepViz

Staff Software Engineer | Data Visualization | Front-End Engineering | User Experience