ResNet Block Explanation with a Terminology Deep Dive

Within neural networks, convolutional neural networks (CNNs) have been at the forefront of AI image recognition tasks. In 2015, the paper ‘Deep Residual Learning for Image Recognition’ (He et al.) demonstrated an important technological breakthrough, the Deep Residual Learning framework, which achieved winning results in AI image recognition competitions such as ILSVRC 2015.

The ‘Residual Learning’ (ResNet) building block, shown in the diagram below (Figure 2 in the paper), is the key to understanding the paper.

Shortcut Connections for Identity Mapping in the ‘Deep Residual Learning for Image Recognition’ paper.
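For reference, the paper’s Equation (1) defines the block’s output as y = F(x, {Wi}) + x, where F is the residual mapping learned by the weight layers. For the two-layer block in the diagram, with biases omitted as the paper does for simplicity, this works out to:

F(x) = W2 · ReLU(W1 · x)

y = ReLU(F(x) + x)

where W1 and W2 are the weights of the first and second weight layers, and the second ReLU is applied after the shortcut addition.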

Recently, I wanted to dig deeper into how this building block actually works. As can be seen above, the diagram comes with no easy reference explanation of the terms it uses (x, weight layer, F(x), ReLU) that relates them to standard neural network terminology.

To help clarify my own understanding, I looked for existing content that detailed the terminology used in the diagram.

On not finding any, I prepared a PowerPoint presentation that builds up the basic neural network terminology (W*X + B, where W is the Weight Vector, X is the Input Vector and B is the Bias Vector) in a step by step manner, and then explains the above ResNet block diagram with respect to this terminology. This article follows the same structure.
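As a minimal sketch of this terminology in code (assuming NumPy; the sizes and values below are illustrative only, and note that for a layer with several outputs W is really a weight matrix, one weight vector per output neuron):

```python
import numpy as np

# Illustrative sizes only: a layer with 3 inputs and 2 outputs.
X = np.array([1.0, 2.0, 3.0])        # input vector X
W = np.array([[0.1, 0.2, 0.3],       # weight matrix W (one weight vector per output)
              [0.4, 0.5, 0.6]])
B = np.array([0.01, 0.02])           # bias vector B

Z = W @ X + B                        # the W*X + B pre-activation
A = np.maximum(Z, 0.0)               # ReLU: max(0, z), used throughout the ResNet block
print(Z, A)
```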

The presentation consists of the following slides:

Slides 1–5: Introduction

Slides 6–10: W, X & B Vectors

Slides 11–13: 1st weight layer F(X) calculation

Slides 14–16: 2nd weight layer F(X) calculation

Slides 17–20: Final calculation of F(X)+X & the subsequent ReLU operation (these steps are sketched in code after this list)

Slides 21–23: Conclusion
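To make the slide steps concrete, here is a minimal NumPy sketch of the whole block. The shapes and values are illustrative assumptions, not the numbers used in the presentation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative shapes and values only (not the presentation's numbers).
rng = np.random.default_rng(0)
X  = np.array([1.0, -2.0, 3.0])           # input vector x
W1 = rng.normal(scale=0.1, size=(3, 3))   # 1st weight layer
B1 = np.zeros(3)
W2 = rng.normal(scale=0.1, size=(3, 3))   # 2nd weight layer
B2 = np.zeros(3)

# Slides 11-13: 1st weight layer, followed by ReLU (as in the diagram).
H = relu(W1 @ X + B1)

# Slides 14-16: 2nd weight layer produces the residual F(X).
F = W2 @ H + B2

# Slides 17-20: shortcut addition F(X) + X, then the final ReLU.
Y = relu(F + X)
print(Y)
```

Note that the identity shortcut requires F(X) and X to have the same dimensions; for dimension changes the paper instead uses a linear projection Ws·X on the shortcut (its Equation 2).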

ResNet Block Explanation Presentation: final slide with all the terminology

Slide 20 with all the mathematical terms is shown above. The link to the full presentation is given below.

http://www.kaytek.co.in/ai/8H31_Kaytek_Residual_Block_Explanation.pdf

I welcome all your feedback and suggestions.