Visualizing How Convolutional Neural Networks “See”

Eric Muccino
Published in Mindboard · Sep 10, 2019

Convolutional Neural Networks (CNNs) are a class of neural network architectures designed to learn image recognition tasks in a way that loosely mimics the human visual system. CNNs learn a series of filters that scan images. Each filter learns to recognize a distinct feature, activating neurons when that feature is identified. The activated features are passed along to the next layer of filters, forming a new image for the next set of filters to observe. Typically, shallow layers learn to identify low-level features such as curves and edges, while deeper layers learn to recognize high-level features such as eyes or windows, depending on the task.

While CNNs are often considered to be uninterpretable “black boxes,” we can use a clever technique to peer into the mind of a CNN and visualize exactly what it has learned to look for when making a certain classification. In this post, we will see a simple way to do this using the Keras library.

Visualization Network

To visually inspect what a neural network is looking for when making classifications, we will engineer an image that maximizes the activation of a filter in a given convolutional layer. This image illustrates a “dream” that our trained image classifier has about a feature it has learned to recognize. To do this, we will use a technique similar to one explored in a previous post.

The idea is to create a neural network composed of an initial dense layer applied to a single-unit input. The initial dense layer contains a set of weights, each corresponding to a pixel in our “dream” image. The dense layer’s outputs are reshaped into the shape of an image and fed into our trained CNN, producing a classification output. We freeze the trained CNN’s layers so that only the weights in the initial dense layer can be altered.
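The architecture described above can be sketched as follows. This is a minimal sketch, assuming a trained Keras model `cnn`; the image shape and function name are illustrative, not from the original post:

```python
# Sketch of the visualization network: a constant single-unit input drives
# a dense layer whose weights are, in effect, the pixels of the dream image.
# Assumes a trained Keras model `cnn`; shapes and names are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers


def build_dream_model(cnn, img_shape=(64, 64, 3)):
    # Freeze the trained CNN so only the new dense weights can be updated.
    cnn.trainable = False

    inp = keras.Input(shape=(1,))
    # One weight per pixel; sigmoid keeps pixel values in [0, 1].
    pixels = layers.Dense(int(np.prod(img_shape)), activation="sigmoid",
                          use_bias=False)(inp)
    # Reshape the dense outputs into an image and feed the frozen CNN.
    img = layers.Reshape(img_shape)(pixels)
    out = cnn(img)
    return keras.Model(inp, out)
```

Feeding this model a constant input of ones and training it end to end updates only the dense weights, i.e., the image itself.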

The network is trained with a custom loss function that maximizes a specified filter’s activation. In Keras, we can apply this loss through a custom regularization function attached to the chosen layer. The network is trained using stochastic gradient descent, producing an image that resembles the essence of what that filter has learned to recognize. Here is the code:
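One way to implement this in Keras is sketched below. Instead of attaching a regularizer object to the layer, this version computes the equivalent penalty, the negative mean activation of the chosen filter, inside an explicit training loop and minimizes it with SGD. The function name, `layer_name`, `filter_index`, and the image shape are illustrative assumptions, not the original code:

```python
# Activation maximization: learn an image (stored as dense-layer weights)
# that maximizes one filter's mean activation in a frozen, trained CNN.
# Assumes a trained Keras model `cnn`; names and defaults are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def dream_image(cnn, layer_name, filter_index,
                img_shape=(64, 64, 3), steps=200, lr=1.0):
    cnn.trainable = False
    # Sub-model exposing the chosen convolutional layer's activations.
    feature_model = keras.Model(cnn.input,
                                cnn.get_layer(layer_name).output)

    # Dense weights acting as the dream image, driven by a constant input.
    one = tf.ones((1, 1))
    dense = layers.Dense(int(np.prod(img_shape)), activation="sigmoid",
                         use_bias=False)
    opt = keras.optimizers.SGD(learning_rate=lr)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            img = tf.reshape(dense(one), (1,) + img_shape)
            acts = feature_model(img)
            # Minimizing the negative mean activation of the target
            # filter maximizes that filter's activation.
            loss = -tf.reduce_mean(acts[..., filter_index])
        grads = tape.gradient(loss, dense.trainable_weights)
        opt.apply_gradients(zip(grads, dense.trainable_weights))

    return tf.reshape(dense(one), img_shape).numpy()
```

Running this for a filter in a shallow, mid, or deep layer produces the kinds of images shown in the results below.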

Results

Here are some images generated by the model:

[Image: Shallow Layer Filters]
[Image: Mid Layer Filters]
[Image: Deep Layer Filters]

Masala.AI

The Mindboard Data Science Team explores cutting-edge technologies in innovative ways to provide original solutions, including the Masala.AI product line. Masala provides media content rating services such as vRate, a browser extension that detects and blocks mature content with custom sensitivity settings. The vRate browser extension is available for download via the Chrome Web Store. Check out www.masala.ai for more info.

Written by Eric Muccino, Data Scientist at Mindboard Inc.