Performance Analysis of Deep Learning Algorithms: Part 1
CNNs vs Capsule Networks on MNIST Dataset
Before you get started, we assume you are aware of the basics of deep learning. If you are not, that's alright; we will provide basic explanations at the relevant places, along with the code for implementation.
In this series, we will first test various well-known algorithms and measure their performance over a wide range of datasets before we test our own algorithms and approaches.
In this analysis, we will be using the famous MNIST dataset. The MNIST database consists of 60,000 samples of handwritten digits that can be used as a training set, plus another 10,000 test samples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centred in a fixed-size image.
It is a good database for those who want to try various learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting.
We will be using Keras with a TensorFlow backend. Keras provides the MNIST dataset built in; it can be accessed as follows:
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Dataset Visualization:
- x_train and y_train are the training data of our model. Validation will be done on x_test and y_test.
- x_train = 60,000 images of handwritten digits of 28 x 28 each.
- y_train = 60,000 labels of the images in x_train.
import numpy as np

print(x_train.shape)
print(y_train.shape[1:])
print(len(np.unique(y_train)))
print(x_train.shape[1:])
Output:
(60000, 28, 28, 1)
(10,)
2
(28, 28, 1)
Note: these shapes were printed after the reshaping and one-hot encoding performed in the training code later in this article. len(np.unique(y_train)) prints 2 rather than 10 because the one-hot-encoded labels contain only the values 0 and 1; on the raw labels it would print 10, the number of classes.
This is how the first training image in x_train looks:
Code to visualise the data:
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 4))
plt.imshow(x_train[0])
print(y_train[0])
plt.show()
Output:
5
Every image in the MNIST dataset is represented as an array of numbers describing how dark each pixel is. For example, we can think of an image of the digit 1 as a 2-D 28 x 28 matrix in which the dark pixels have high values.
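To see this directly on the raw data, you can print a horizontal slice through the middle of the first image (a quick illustrative check, not part of the original pipeline):
print(x_train[0][14])  # row 14 cuts through the digit; 0 = background, values near 255 = dark ink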
Techniques used:
1. Convolutional Neural Networks:
When it comes to machine learning, Artificial Neural Networks perform very well compared to other algorithms. Their artificial neurons behave somewhat like human neurons: they learn from the information we pass to them. Neural networks come in many forms and can be used for a wide variety of purposes. For example, we can use Recurrent Neural Networks, more precisely LSTMs, to predict sequences of words. CNNs are generally used in image recognition applications.
Architecture: A CNN learns its parameters in much the same way as a simple neural network. The following are the layers used in a CNN.
- Input layer: This layer holds an image input of shape (height, width, 3), where the first and second dimensions are the height and width of the input image and the third dimension is for the RGB channels. Generally, the input image is reshaped such that height = width. It can be thought of as the matrix we discussed in the data visualization part.
- Convolution Layer: This layer computes the output volume by taking the dot product between each filter and each patch of the image. Suppose we have an input image of shape (28, 28, 3) and 12 filters of size 3 (that is, each of shape 3 x 3 x 3) with no padding. Each filter slides over the input image with a stride of 1, producing an output volume of shape (26, 26, 12). We can get an output volume of (28, 28, 12) by applying a padding of 1 to the borders of the input image (see the shape sketch after this list).
- Activation Layer: This applies an activation function that decides the final value of each neuron. After a convolution, the dot products may produce negative values. To remove them, many CNNs use the ReLU activation, which sets all negative values to zero:
f(x) = max(0, x)
Some common activation functions are ReLU: max(0, x), Sigmoid: 1/(1+e^-x), Tanh, and Leaky ReLU. The output volume keeps the same shape, e.g. (28, 28, 12).
- Pooling Layer: Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the computation in the network. The pooling layer operates on each feature map independently. The most common approach is max-pooling; another is average pooling. Max-pooling takes the maximum value from each patch of the given filter size, while average pooling takes the average of all the values in the patch.
- Fully Connected layer: This is a simple neural layer that takes the flattened input and outputs a vector with one value per class.
- Dropout Layer: This layer is used to reduce over-fitting during training.
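To make the shape arithmetic above concrete, here is a minimal Keras sketch (the layer sizes are illustrative, not a recommendation) showing how padding and pooling change the output volume:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

model = Sequential()
# 12 filters of size 3x3 on a (28, 28, 3) input with no padding: output (26, 26, 12)
model.add(Conv2D(12, (3, 3), padding='valid', activation='relu', input_shape=(28, 28, 3)))
# 'same' padding pads the borders so the spatial size is preserved: output stays (26, 26, 12)
model.add(Conv2D(12, (3, 3), padding='same', activation='relu'))
# 2x2 max-pooling halves the spatial dimensions: output (13, 13, 12)
model.add(MaxPooling2D(pool_size=(2, 2)))
model.summary()  # prints the output shape of every layer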
A Quick Intuition:
Let's start with a pretty simple example: suppose we have an image of a cross. This cross can be expressed as a 3 x 3 matrix where all the diagonal elements are 1 and the others are -1.
A Convolutional Neural Network learns the small features of an image first. For example, if a CNN is trained on human faces, the early layers of the network learn small features of the face such as lines and curves. As it proceeds up the hierarchy, it starts learning complex features such as eyes, nose, lips, etc. In the final layer, it predicts whether the image is a face or not.
So, let's resume our example of a cross. One can say that convolution + ReLU + pooling = feature extraction. Let's verify that on this example. Suppose we take a filter of size 2 as shown in the figure.
Iterating it through our input matrix and taking the dot product at each position, we get the matrix [(-4, 4), (4, -4)] as shown in the figure. This process is called convolution.
The negative values are then set to zero by the ReLU activation.
The features with a large dot product are then kept by the max-pooling layer, which results in a "forward-slash"-like feature of the "cross" input, since the high values sit at the top-right and bottom-left of the matrix.
It's easy to see that we could use a different filter to get the "backslash". That's why increasing the number of filters forces a CNN to learn more and more features.
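A quick NumPy sketch of this pipeline (the original figure is not reproduced here, so the 2 x 2 filter below is an assumed anti-diagonal detector, chosen to reproduce the [(-4, 4), (4, -4)] output above):
import numpy as np

# 3x3 "cross": all diagonal elements are 1, the others are -1
cross = np.array([[ 1, -1,  1],
                  [-1,  1, -1],
                  [ 1, -1,  1]])

# Assumed 2x2 filter that responds to a "forward-slash" pattern
filt = np.array([[-1,  1],
                 [ 1, -1]])

# Convolution: slide the filter with stride 1, taking a dot product at each position
conv = np.array([[np.sum(cross[i:i+2, j:j+2] * filt) for j in range(2)]
                 for i in range(2)])
print(conv)        # [[-4  4] [ 4 -4]]

# ReLU: negative values become zero
relu = np.maximum(conv, 0)
print(relu)        # [[0 4] [4 0]] -- the surviving activations trace a "/"

# Max-pooling: a single strong response says the pattern was found somewhere
print(relu.max())  # 4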
Program Code:
from __future__ import print_function
import numpy as np
from keras.datasets import mnist
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import RMSprop
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 50
img_rows, img_cols = 28, 28
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
#defining the model
model = Sequential()
model.add(Conv2D(32, kernel_size = (3, 3), activation = 'relu', input_shape = input_shape))
model.add(Conv2D(512, (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation = 'relu'))
model.add(Dropout(0.4))
model.add(Dense(num_classes, activation = 'softmax'))
model.compile(loss = keras.losses.categorical_crossentropy,
              optimizer = keras.optimizers.Adadelta(),
              metrics = ['accuracy'])
model.summary()
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
history = model.fit(x_train, y_train, batch_size = batch_size, verbose = 1, epochs = epochs, validation_data = (x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
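Since model.fit returns a History object, you can also plot the training curves yourself. A minimal sketch (note that older Keras versions use the keys 'acc'/'val_acc', while newer ones use 'accuracy'/'val_accuracy'):
import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
plt.plot(history.history['acc'], label = 'training accuracy')
plt.plot(history.history['val_acc'], label = 'validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()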
2. Capsule Networks
Owing to the complexity of the topic, we will only give an outline here of what a capsule network does.
Why Capsule Networks?
Let us give you an example. What a CNN does is extract the smallest features and then predict complex features from them. Suppose a CNN predicts a face. What features does a face have? Eyes, nose, lips, etc. What if we run the prediction on an image where the positions of the eyes, nose, and lips are random? Will it still predict a face? The answer is yes. A CNN does not consider the actual positions of the features, and is therefore not robust to such changes.
The reason a CNN does not consider the positions of features is the max-pooling layer: it throws away most of the instantiation parameters, such as pose (position, size, orientation), deformation, velocity, albedo, hue, texture, etc.
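A toy illustration of this (illustrative only): max-pooling over a patch reports that a feature fired somewhere in the patch, but not where.
import numpy as np

a = np.array([[9, 0], [0, 0]])  # feature detected in the top-left of the patch
b = np.array([[0, 0], [0, 9]])  # same feature in the bottom-right
print(a.max(), b.max())         # 9 9 -- pooling gives the same answer for both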
That's why capsule networks came into existence. Instead of keeping only the maximum value, a capsule network considers all the values and adjusts their coupling coefficients according to how well they agree with the predictions of the higher-level capsules.
The detailed architecture used in our analysis can be found in the paper by Sabour, Frosst, and Hinton titled 'Dynamic Routing Between Capsules'. We highly recommend you go through it.
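One concrete ingredient from that paper: capsules output vectors, and the 'squash' non-linearity scales each vector so that short vectors shrink toward zero while long vectors approach (but never reach) unit length. A minimal NumPy sketch of the formula v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||):
import numpy as np

def squash(s, eps = 1e-8):
    # v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    squared_norm = np.sum(np.square(s))
    norm = np.sqrt(squared_norm + eps)
    return (squared_norm / (1.0 + squared_norm)) * (s / norm)

print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.0099: short vectors shrink toward 0
print(np.linalg.norm(squash(np.array([10.0, 0.0]))))  # ~0.9901: long vectors approach length 1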
Program Code:
You can access the capsule network code from the reference implementation by GitHub user XifengGuo: https://github.com/XifengGuo/CapsNet-Keras
# load data
(x_train, y_train), (x_test, y_test) = load_mnist()
# define model
model, eval_model, manipulate_model = CapsNet(input_shape=x_train.shape[1:],
                                              n_class=len(np.unique(np.argmax(y_train, 1))),
                                              routings=args.routings)

def CapsNet(input_shape, n_class, routings):
    """
    A Capsule Network on MNIST.
    :param input_shape: data shape, 3d, [width, height, channels]
    :param n_class: number of classes
    :param routings: number of routing iterations
    :return: Two Keras Models, the first one used for training, and the second one for evaluation.
            `eval_model` can also be used for training.
    """
    x = layers.Input(shape=input_shape)

    # Layer 1: Just a conventional Conv2D layer
    conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)

    # Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
    primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding='valid')

    # Layer 3: Capsule layer. Routing algorithm works here.
    digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,
                             name='digitcaps')(primarycaps)

    # Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
    # If using tensorflow, this will not be necessary. :)
    out_caps = Length(name='capsnet')(digitcaps)

    # Decoder network.
    y = layers.Input(shape=(n_class,))
    masked_by_y = Mask()([digitcaps, y])  # The true label is used to mask the output of capsule layer. For training
    masked = Mask()(digitcaps)  # Mask using the capsule with maximal length. For prediction

    # Shared Decoder model in training and prediction
    decoder = models.Sequential(name='decoder')
    decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))
    decoder.add(layers.Dense(1024, activation='relu'))
    decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))
    decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))

    # Models for training and evaluation (prediction)
    train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])
    eval_model = models.Model(x, [out_caps, decoder(masked)])

    # manipulate model
    noise = layers.Input(shape=(n_class, 16))
    noised_digitcaps = layers.Add()([digitcaps, noise])
    masked_noised_y = Mask()([noised_digitcaps, y])
    manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
    return train_model, eval_model, manipulate_model
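For completeness: the training model above is compiled with the margin loss from the paper plus a down-weighted reconstruction loss. A sketch of the margin loss, with m+ = 0.9, m- = 0.1 and lambda = 0.5, where y_pred is the length of each digit capsule as produced by the Length layer above:
from keras import backend as K

def margin_loss(y_true, y_pred):
    # L_k = T_k * max(0, m+ - ||v_k||)^2 + 0.5 * (1 - T_k) * max(0, ||v_k|| - m-)^2
    L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \
        0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))
    return K.mean(K.sum(L, 1))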
Tabulation of Results:
After training for 50 epochs the statistics are as follows:
Compared with the CNN, the modern CapsNet surpasses the older convolution-only technique. This was expected, since CapsNet takes into account what a CNN ignores. CapsNet not only predicts but also reconstructs the given image in a clear and smooth form; here is an example.
The first five rows are the input digits and the next five rows are their reconstructed images. You can see that digits with breaks in them are automatically filled in after reconstruction. This is the power of CapsNet.
The following is the training-loss graph and the training and validation accuracy graph for CapsNet. The curves indicate that the model has trained well.
Stay tuned as we explore more diverse algorithms on interesting datasets!
This analysis was carried out by Anshul Warade of The Research Nest’s R&D Team.
Clap and share if you liked this one. Do follow ‘The Research Nest’ for more insightful content.