A Deep Dive Into The Improved Fast R-CNN Approach

Deep Learning for Object Detection and Localization Part II

Saurabh Bagalkar
Alegion
9 min read · Oct 17, 2019


Introduction

Deep Learning for Object Detection Part II — A Deep Dive Into Fast R-CNN is the second article in our Deep Learning for Object Detection series, which explores state-of-the-art, region-based object detection methods and their evolution over time. In this piece, we look at the innovations that improve training and testing speed and overcome many of the drawbacks of the R-CNN approach.

In Part I, we saw how R-CNN, pioneered by Ross Girshick and his team, sparked new research interest in region-based object detection. The following year, they released a new paper detailing how they souped up R-CNN.

We will take a look at how they improved upon R-CNN and which methodologies they employed, to develop a deeper understanding of why this method is called Fast R-CNN. We have also provided sample TensorFlow code to aid understanding of how the tensors flow between layers and how spatial pyramid pooling works in action.

R-CNN vs Fast R-CNN

The landscape of computer vision, and more specifically object detection, has changed dramatically since Convolutional Neural Networks (CNNs) were first applied to object detection around 2013. As you can see in the graph below, precision had plateaued around 40% until deep learning based object detection took off. Advances in research, coupled with access to faster and more powerful hardware, caused precision to grow rapidly, and the current state of the art is around 80%.

There was a meteoric rise in precision after CNNs were used for object detection. Image Source
R-CNN modules. Image source

As we saw last time, R-CNN relies on region proposals pre-computed with a selective search (SS) approach. Features are not shared between these regions, so a separate forward pass must be run for each region of interest (RoI) that SS generates. This is computationally expensive and makes real-time object detection impractical.

Fast R-CNN modules Image source

Fast R-CNN, by contrast, trains a deep VGG-16 network 9x faster than R-CNN, is 213x faster at test time, and achieves a higher mAP on PASCAL VOC 2012. How does it achieve such a huge gain in speed? Fast R-CNN evaluates the network and extracts features for the whole image once, instead of extracting features from each RoI, which in R-CNN is cropped and rescaled every time. It then uses the concept of RoI pooling, a special case of the pyramid pooling used in SPPNet, to produce a feature vector of the desired length. This feature vector is then used for classification and localization. The method is more efficient than R-CNN because the computations for overlapping regions are shared. Let us unpack the details of the process to gain a deeper understanding.

Architectural Design of Fast R-CNN

Building blocks of Fast R-CNN:

  1. Region proposal (via selective search)
  2. Feature extraction using CNN
  3. RoI pooling layer — This is where the real magic of Fast R-CNN happens
  4. Classification and Localization
Building blocks of Fast R-CNN Image Source

The region proposal and feature extraction modules are very similar to what we have seen in R-CNN. But instead of passing each cropped and re-scaled RoI through the network, the entire input image is passed through a feature extractor such as VGG-16 to produce a convolutional feature map. These features (i.e., the convolutional feature map) are combined with the region proposals generated by the SS approach to form a fixed-length feature vector in the RoI pooling layer. Each of these feature vectors is then passed along to the classification and localization modules. The classification module outputs a softmax probability over K+1 classes (K object classes plus one background class). The localization module outputs four real-valued numbers for each of the K object classes.
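To make the two output heads concrete, here is a minimal TF 2 sketch (my own illustration, not the paper's code) of the classification and localization modules sitting on top of an RoI's pooled feature vector. The 4096-unit layers mirror VGG-16's fully connected layers, K = 20 is assumed for PASCAL VOC, and fast_rcnn_heads is a hypothetical helper name.

```python
import tensorflow as tf

K = 20  # number of foreground classes, e.g. PASCAL VOC (assumption for illustration)

def fast_rcnn_heads(pooled_features):
    """Sketch of the two sibling heads applied to [num_rois, feature_dim] RoI vectors."""
    x = tf.keras.layers.Dense(4096, activation="relu")(pooled_features)
    x = tf.keras.layers.Dense(4096, activation="relu")(x)
    class_logits = tf.keras.layers.Dense(K + 1)(x)       # K object classes + 1 background
    class_probs = tf.nn.softmax(class_logits, axis=-1)   # softmax probability per class
    bbox_deltas = tf.keras.layers.Dense(4 * K)(x)        # 4 real values per object class
    return class_probs, bbox_deltas
```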

Spatial Pyramid Pooling

Let’s see what Fast R-CNN does differently that makes it much faster than its predecessor. To understand RoI pooling, it is vital to understand how SPPNet uses spatial pyramid pooling (SPP) to extract fixed-length output vectors. SPP is heavily inspired by image pyramids, a concept widely used in image processing and computer vision.

Image pyramids are traditionally used in computer vision to generate filter-based, decomposed representations of the original image in order to extract useful features at multiple scales. They are also used for storing compressed image representations. With this concept in mind, let us take a look at the following figure, which uses a 3-level SPP.

A network structure with a spatial pyramid pooling layer. Image Source

Imagine we have a seven-layer-deep CNN. Let’s say the 5th convolutional layer conv_5 has a depth (number of filters) of 256, which means it produces 256 individual feature maps stacked along the z-axis (the channel dimension). Normally, this feature map would have to be flattened and connected to a fully connected layer, as earlier CNN architectures did. What SPP does differently is decompose this m x n x 256 volume (where m and n are the height and width of conv_5's output) into a fixed-length one-dimensional vector by max-pooling the feature maps at several scales and concatenating the results.

The coarsest pyramid level (at the very bottom, indicated in white) uses a global pooling operation that covers the entire feature map, creating a 256-d flattened vector. The middle level (indicated in green) pools 4 values from each feature map, giving a 4 x 256-d flattened vector. The topmost level (blue) pools 16 values from each feature map, giving a 16 x 256-d vector. So, in this example, we end up with a (256 + 4 x 256 + 16 x 256) = 5376-dimensional flattened vector that is fed to our fully connected layer. For a more in-depth understanding of the math, please refer to this GitHub repo.

Need for Spatial Pyramid Pooling

But why do we have to do spatial pyramid pooling in the first place? CNNs, which have densely connected layers at deeper levels, traditionally require a fixed input size dictated by the architecture used. For example, it is common practice to use 227 x 227 x 3 for AlexNet. So why do CNNs require a fixed image size? Let’s analyze this quote from the SPPNet paper:

In fact, convolutional layers do not require a fixed image size and can generate feature maps of any sizes. On the other hand, the fully-connected layers need to have fixed size/length input by their definition. Hence, the fixed size constraint comes only from the fully-connected layers, which exist at a deeper stage of the network.

The authors make a very important point: fully connected layers require a fixed-length input. Why? As we know, backpropagation is always done with respect to weights and biases. In convolutional layers, these weights are simply the filters, and the number of filters is independent of the image height and width and is decided a priori. Fully connected layers, however, have a fixed number of input connections. If we train on images of varying spatial dimensions, the flattened feature vector changes length, which leads to dimension mismatch errors. To avoid this, we have to use fixed image dimensions. Resizing and cropping images is not always ideal, because recognition/classification accuracy can be compromised by the resulting content loss or distortion.

The spatial bins that we use here (16, 4, 1) have sizes proportional to the image size, so the number of bins is fixed regardless of image size. This means we get rid of the size constraint, i.e., no cropping or resizing of the original images. This significantly increases efficiency, because we no longer have to preprocess images to a fixed height and width, and we always get a fixed-length flattened vector regardless of the input image size, as the short demonstration below shows.
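As a quick, hypothetical demonstration of the argument above (my own snippet, not from the original post): convolutional layers accept inputs of any spatial size, but the flattened outputs then differ in length, which is exactly what a fully connected layer cannot handle.

```python
import tensorflow as tf

# Convolutional layers happily accept different spatial sizes...
conv = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")
a = conv(tf.random.normal([1, 224, 224, 3]))
b = conv(tf.random.normal([1, 300, 180, 3]))
print(a.shape, b.shape)  # (1, 224, 224, 64) and (1, 300, 180, 64)

# ...but flattening them yields vectors of different lengths, so a fully connected
# layer sized for one input cannot consume the other. Pooling into a fixed number
# of spatial bins (SPP) removes this constraint.
print(tf.keras.layers.Flatten()(a).shape, tf.keras.layers.Flatten()(b).shape)
```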

The SPP method can take images of arbitrary size and generate a fixed-length feature vector. Image source

TensorFlow Code

Let’s look at some TensorFlow code to understand how the tensors flow between layers and how spatial pyramid pooling works in action. For simplicity, I am using the AlexNet architecture to illustrate how the tensors are spatially pooled when a spatial pyramid pooling layer is introduced. Here is the feedforward architecture of AlexNet that optionally uses the spatial pyramid pooling operation. Note: this part does not include any training, as I am not backpropagating derivatives or defining accuracy metrics.
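A simplified TF 2 sketch of that feedforward stack is given below; it is an illustration rather than the exact original code, and it stops at conv_5 so that the spatial pyramid pooling layer from the next snippet can be applied to inputs of any size.

```python
import tensorflow as tf

def alexnet_conv_stack():
    """AlexNet-like convolutional feature extractor (no fully connected layers)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu"),
        tf.keras.layers.MaxPool2D(3, strides=2),
        tf.keras.layers.Conv2D(256, 5, padding="same", activation="relu"),
        tf.keras.layers.MaxPool2D(3, strides=2),
        tf.keras.layers.Conv2D(384, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(384, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu"),
    ])

# conv_5 output keeps a spatial grid whose size depends on the input image
features = alexnet_conv_stack()(tf.random.normal([1, 227, 227, 3]))
print(features.shape)  # (1, 13, 13, 256) for a 227 x 227 input
```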

Implementation of Spatial Pyramid Pooling

Here is an implementation of the spatial pyramid pooling layer using TensorFlow. Try experimenting with different image sizes to verify that no matter the image height and width (although technically, for this architecture, the images should be at least 163x163), you always get a fixed-length vector that feeds into your fully connected network.
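A minimal TF 2 sketch of such a layer follows (a simplified illustration that assumes static, eager-mode shapes, not a production implementation): levels=(4, 2, 1) reproduces the 16 + 4 + 1 = 21 bins from the figure, so with 256 channels the output is always 5376-dimensional, whatever the spatial size of the input.

```python
import tensorflow as tf

def spatial_pyramid_pool(feature_maps, levels=(4, 2, 1)):
    """Max-pool [batch, H, W, C] feature maps over a pyramid of n x n grids.

    Each level n splits the feature map into an n x n grid and max-pools each
    cell, contributing n * n * C values; concatenating all levels gives a
    fixed-length vector no matter what H and W are.
    """
    _, height, width, channels = feature_maps.shape  # static shapes assumed (eager mode)
    pooled = []
    for n in levels:
        for i in range(n):
            # floor/ceil bin edges so the bins always cover the whole map
            y0, y1 = (height * i) // n, -(-(height * (i + 1)) // n)
            for j in range(n):
                x0, x1 = (width * j) // n, -(-(width * (j + 1)) // n)
                cell = feature_maps[:, y0:y1, x0:x1, :]
                pooled.append(tf.reduce_max(cell, axis=[1, 2]))  # [batch, C]
    return tf.concat(pooled, axis=1)  # [batch, 21 * C] for levels (4, 2, 1)

# The flattened length is the same for different spatial sizes:
# (16 + 4 + 1) * 256 = 5376 bins x channels in both cases.
print(spatial_pyramid_pool(tf.random.normal([1, 13, 13, 256])).shape)  # (1, 5376)
print(spatial_pyramid_pool(tf.random.normal([1, 24, 36, 256])).shape)  # (1, 5376)
```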

Breakdown of Parameters

Scroll through this CSV for a complete breakdown of the number of parameters used by AlexNet with SPP along with its architectural design.

AlexNet with SPP architecture design. Inspired by this Medium post

RoI pooling layer

(Left) RoI pooling vs. (right) SPP. Image Source

Now that we understand how SPP works, the only difference in the RoI pooling layer is that we do not generate separate pyramid levels at multiple scales; instead, every region of interest is pooled into a single level. This creates one feature map of size H x W (height x width), where H and W are hyperparameters that the user can tune. So imagine that, in the spatial pyramid pooling code above, we use just one pyramid level and set its grid to H x W = 7 x 7. The output of the RoI pooling layer will then be a tensor of shape (2000 x 7 x 7 x 256). The 256 comes from the depth of the previous layer's convolutional feature maps, while the 2000 comes from the number of object proposals generated by selective search.

Here is an in-depth look at the original architecture, i.e., VGG-16, used in Fast R-CNN. Note: this is the original architecture, so its depth and spatial dimensions are different from what we used with AlexNet for demo purposes above.

How tensors flow between layers in original architecture. Image source

In the above image, the input to RoI pooling is a 2000 x 5 tensor of RoIs selected with selective search and a feature map of depth 512. The 512 is the depth (number of filters) of the previous convolutional layer. Why 5 values per RoI? They represent [batch_index, x_min, y_min, x_max, y_max] for each RoI. These RoIs are projected onto the CNN feature maps (shown with a depth of 512 in the above image), which means the RoI coordinates are scaled by a factor of 1/k, where k is the total spatial downsampling that has occurred from the input to the present layer (say the width goes from 1000 to 100 pixels, so the downsampling factor is 10). Each projected RoI is then divided into an H x W grid and max-pooled, and this pooling is performed on all 2000 RoIs. This part was a bit confusing to me at first, but this snippet from Ross Girshick's original Caffe implementation made things clearer.

Original GitHub repo. The spatial scale used in the original code is 1/16.
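As a rough illustration of what that layer does (a sketch under simplifying assumptions, not Girshick's Caffe code): each RoI's image-space coordinates are multiplied by the spatial scale of 1/16, the projected region is divided into an H x W grid, and each cell is max-pooled.

```python
import numpy as np
import tensorflow as tf

def roi_pool(feature_map, rois, pool_h=7, pool_w=7, spatial_scale=1.0 / 16):
    """Rough RoI pooling sketch for a single image.

    feature_map: [1, H, W, C] conv feature map.
    rois: rows of [batch_index, x_min, y_min, x_max, y_max] in image coordinates
          (batch_index is ignored in this single-image sketch).
    """
    channels = feature_map.shape[-1]
    outputs = []
    for _, x_min, y_min, x_max, y_max in np.asarray(rois):
        # project image coordinates onto the downsampled feature map
        x0, y0 = int(x_min * spatial_scale), int(y_min * spatial_scale)
        x1 = max(int(x_max * spatial_scale), x0 + 1)
        y1 = max(int(y_max * spatial_scale), y0 + 1)
        region = feature_map[:, y0:y1, x0:x1, :]
        rh, rw = y1 - y0, x1 - x0
        bins = []
        for i in range(pool_h):
            ys, ye = (rh * i) // pool_h, -(-(rh * (i + 1)) // pool_h)
            for j in range(pool_w):
                xs, xe = (rw * j) // pool_w, -(-(rw * (j + 1)) // pool_w)
                cell = region[:, ys:ye, xs:xe, :]
                bins.append(tf.reduce_max(cell, axis=[1, 2]))  # [1, C]
        outputs.append(tf.reshape(tf.stack(bins), [pool_h, pool_w, channels]))
    return tf.stack(outputs)  # [num_rois, pool_h, pool_w, C]

# e.g. 2000 proposals on a 512-channel VGG-16 feature map give (2000, 7, 7, 512)
```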

Once we have a tensor of dimensions [batch_size, height, width, depth] from RoI pooling, we flatten it and add a couple of fully connected layers on top. This flattened vector is then used for calculating both the classification loss and the bounding box regression loss, as sketched below.
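A minimal sketch of that multi-task objective, assuming the per-class box outputs have already been gathered for each RoI's ground-truth class (function names like fast_rcnn_loss and smooth_l1 are just for illustration); lam = 1 balances the two terms as in the paper.

```python
import tensorflow as tf

def smooth_l1(x):
    """Smooth L1: 0.5 * x^2 if |x| < 1, else |x| - 0.5."""
    absx = tf.abs(x)
    return tf.where(absx < 1.0, 0.5 * tf.square(x), absx - 0.5)

def fast_rcnn_loss(class_logits, bbox_deltas, labels, bbox_targets, lam=1.0):
    """Sketch of the multi-task loss over a mini-batch of RoIs.

    class_logits: [N, K+1] raw scores; labels: [N] integer class ids, 0 = background.
    bbox_deltas / bbox_targets: [N, 4] regression outputs/targets already selected
    for each RoI's ground-truth class (out of the 4K per-class outputs).
    """
    cls_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                       logits=class_logits))
    fg = tf.cast(labels > 0, tf.float32)[:, None]  # box loss counts only for foreground RoIs
    loc_loss = tf.reduce_sum(fg * smooth_l1(bbox_targets - bbox_deltas))
    loc_loss = loc_loss / tf.maximum(tf.reduce_sum(fg), 1.0)
    return cls_loss + lam * loc_loss
```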

Conclusion

Fast R-CNN achieves a giant leap in accuracy and in training and inference speed thanks to its smart use of the RoI pooling layer, which lets it process the entire image once instead of cropping multiple RoIs and passing each through the network separately. All parameters, including the CNN backbone, are trained jointly, with a cross-entropy loss for class classification and a smooth L1 loss for bounding box prediction. Beyond this change, Fast R-CNN is very similar in its architecture to its predecessor.

As with all architectures, Fast R-CNN is not perfect and has its own limitations: it still requires region proposals, which are not trainable. So, essentially, it is not an end-to-end trainable architecture. In our next blog, we will look at how this was achieved by researchers who aptly called the next version Faster R-CNN.
