Why Interpretability Matters — Explainable AI Visualization (Part 9)

Parvez Kose
Published in DeepViz · 6 min read · Feb 17, 2023

This article continues the research for the study ‘Explainable Deep Learning and Visual Interpretability.’

The interpretability research investigates how to make sense of which specific features a network has detected, which inputs its neurons are most receptive to, and how that correlates with the class prediction. We studied existing interpretability methods and a general approach to how a neural network learns and operates internally, one that considers the magnitude of each feature detected inside the hidden layers.
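As a concrete illustration, the sketch below (not part of the study's codebase) registers a forward hook on a generic PyTorch image classifier and reports which channels of one hidden layer respond most strongly to an input; the model, layer name and input are hypothetical stand-ins.

```python
# Minimal sketch: inspect hidden-layer feature magnitudes via a forward hook.
# The model, layer and input are illustrative stand-ins, not the study's setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Record the output of one hidden layer during the forward pass.
model.layer3.register_forward_hook(save_activation("layer3"))

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed input image
logits = model(x)

# Mean absolute activation per channel: a rough magnitude of each detected feature.
magnitudes = activations["layer3"].abs().mean(dim=(0, 2, 3))
top = torch.topk(magnitudes, k=5)
print("Most responsive channels:", top.indices.tolist())
print("Predicted class index:", logits.argmax(dim=1).item())
```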

The role of interpretability in artificial intelligence has gained significant attention in recent years. With the growing success of neural networks and deep learning, there is an emerging need to explain their decisions, identify bias in the model and the dataset, and assess these models for potential risks of cultural and social harm in automated systems. The issue of black-box models has been a primary concern and a significant focus of discourse in the last few years among researchers, practitioners, and businesses operating in regulated industries such as credit, insurance, legal and benefit systems.

The Need for Interpretable Systems

Over the last few years, algorithmic bias and accountability have been a concern among consumer advocates, regulators, lawmakers and diversity moderators. An example of this is a 2017 report from the AI Committee of the British Parliament, which states that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in society. Whether this takes the shape of explanation, technical transparency, or both depends on the context and the stakes involved in its application and sector. In most cases, however, the report holds that explainability is useful for citizens and consumers.

Without a clear understanding of how and why a model works in a certain way, developing these models relies on a time-consuming trial-and-error process. Consequently, both researchers and practitioners face challenges with sophisticated models and demand more transparent and explainable systems in order to understand and analyze these models better. Whether it is a financial decision, a medical decision or a military decision, one cannot rely on a black-box method. It is time to make these decisions more transparent and understandable before the technology becomes even more pervasive.

As deep learning spreads across domains, we must equip its users with tools for understanding when a model works correctly, when it fails, and ultimately how to improve its performance. As deep learning becomes more pervasive in public domains, it is essential to verify, for a given task, that the accuracy results from properly framing the problem and not from exploiting artifacts in the data.

Therefore, techniques for interpreting and understanding what the model has learned have become a key ingredient of a robust validation procedure. Interpretability is critical in applications such as medical diagnosis, credit approval and self-driving cars, where the model's dependence on the correct features must be guaranteed.

Interpretable Methods

In this section, I briefly discuss the concept of interpretability in the context of neural networks and then explore methods for interpreting and understanding deep neural networks.

Interpretability is the degree to which a human can understand the cause of a decision. Another definition is the degree to which a human can consistently predict the model's result.

The higher the interpretability of a machine learning model, the easier it is for someone to understand why certain decisions have been made. A model is more interpretable than another model if its decisions are easier for a human to comprehend than the decisions of the other model.

In general, the concept of interpretability centers on human understanding and comprehension of the object in question. In the context of neural networks, however, definitions vary in which aspect of the model is to be understood: its internals, its operations, its mapping of data, or its representation. Although recent work has begun to standardize the definition of interpretability, a formal and commonly agreed-upon definition remains open.

In our case, interpretability is the process of generating human-understandable explanations of why a neural network model makes a particular decision. Since the learning system hides the entire decision process behind the complicated inner workings of deep neural networks, it becomes difficult to obtain interpretations and explanations for their decisions.

Information Visualization

Information visualization is the study and use of visual representations of abstract data to reinforce human cognition. It can help uncover actionable insight from unstructured data, supporting the exploration and understanding of patterns in the data and communicating essential aspects of a dataset in a concise, easy-to-understand fashion.

More than ever, data visualization has become critical to AI. Data analysis is unarguably an indispensable part of machine learning project pipelines. The AI development pipeline often begins with a data exploration phase, also known as exploratory data analysis, which helps analyze the data and evaluate different approaches to solving the problem. This has primarily been done with fundamental visualization techniques such as histograms, scatter plots, charts and graphs.
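As a simple illustration of that exploratory step, the sketch below assumes a tabular dataset loaded with pandas; the file and column names are hypothetical.

```python
# Minimal EDA sketch: summary statistics plus the kinds of basic plots named above.
# The file and column names ("training_data.csv", "feature_a", "target") are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("training_data.csv")

print(df.describe())       # per-column summary statistics
print(df.isna().sum())     # missing values per column

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
df["feature_a"].hist(ax=axes[0], bins=30)                # distribution of one feature
axes[0].set_title("feature_a distribution")
df.plot.scatter(x="feature_a", y="target", ax=axes[1])   # relationship to the target
axes[1].set_title("feature_a vs. target")
plt.tight_layout()
plt.show()
```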

Visual Analytics System

Visual Analytics combines information visualization and scientific visualization and focuses on analytical reasoning enabled by interactive visual interfaces. It’s an amalgamation of computer science, information visualization, cognitive and perceptual sciences, interactive design and social science.

Visual analytics systems have been developed to inspect artificial neural networks, since visual feedback is considered highly valuable by practitioners and researchers. Although modern visual analytics systems provide sophisticated visualizations and rich interaction interfaces, there is a need to keep humans in the loop of the analysis process and improve the relevance of explainable deep learning techniques.

Visualization and Interpretability

In this section, I highlight the importance of interactive visualization in neural network interpretability and discuss some key initiatives in deep learning visualizations.

As machine learning models grow more complex, the need to understand their inner workings becomes ever more critical. Visualization is a powerful tool to fill this need. It has wide-ranging benefits, from explaining the rationale behind an AI decision and detecting bias in the model and the dataset, to developing trust and confidence in the model's behavior in the real world.

Visualization can play an essential role in enhancing the interpretability and explainability of deep learning models. It can provide an in-depth understanding of how these models work. Combined with XAI explanation approaches, it can bring more insight into an often opaque, complex AI system. It can be used to explain how AI techniques work and to help demystify the training and inference process of a model. Visualization and machine learning interpretability both aim to provide human insight from data and to deal with hard-to-quantify values.
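As one small example of such a visualization, the sketch below draws a vanilla gradient saliency map for a generic PyTorch classifier; the model and the synthetic input are hypothetical stand-ins, and gradient saliency is just one of many XAI techniques.

```python
# Minimal sketch: a vanilla gradient saliency map rendered with matplotlib.
# The classifier and the synthetic input are illustrative stand-ins.
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image

logits = model(x)
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()                            # gradient of the top class w.r.t. pixels

# Saliency: how strongly each pixel influences the predicted class score.
saliency = x.grad.abs().max(dim=1).values.squeeze(0).numpy()

plt.imshow(saliency, cmap="hot")
plt.title(f"Gradient saliency for class {pred}")
plt.axis("off")
plt.show()
```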

Visualizing and interpreting neural networks is currently emerging as a promising research field. It has also attracted research at the intersection of machine learning, information visualization and human-computer interaction (HCI) to build solutions with effective human interaction and experience design.

The growing interest in this field is reflected in several international conferences and workshops dedicated exclusively to interpretability and visualization for machine learning, such as Visualization for Machine Learning (2018). These topics have also become key concerns in panel discussions at premier venues such as the NeurIPS 2016 Workshop on Interpretable ML for Complex Systems and the ACM Intelligent User Interfaces Workshop on Explainable Smart Systems (EXSS 2018). In addition, a growing number of research papers on visualization and interpretability for deep learning have been published.

*Explainable Artificial Intelligence Research at DARPA — Further, the report maintains that it is not acceptable to deploy an artificial intelligence system that could substantially impact an individual's life unless it can provide a comprehensive and satisfactory explanation for its decisions. In such cases, this means delaying deployment and use until the system has been vetted and a credible alternative solution is found.

The next article in this series covers explainable artificial intelligence techniques in detail.
