InceptionResNetV2 Simple Introduction

Zahra Elhamraoui
4 min read · May 16, 2020

During my journey with transfer learning, the InceptionResNetV2 pre-trained model surprised me the most: it gave the best results among all the pre-trained models I tried. If you are curious too, stick around until the end of this story.

What is the Pre-trained Model?

A pre-trained model has been previously trained on a dataset and contains the weights and biases that represent the features of that dataset. Learned features are often transferable to different data. For example, a model trained on a large dataset of bird images will contain learned features, like edges or horizontal lines, that are transferable to your own dataset.
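In Keras, reusing those learned features typically looks like the following minimal sketch: the pre-trained InceptionResNetV2 is loaded without its classifier head, frozen, and topped with a new head for your own data. The input size and the 10-class output are placeholders here, not values from a specific project.

```python
# Minimal transfer-learning sketch (assumes TensorFlow/Keras is installed;
# the input shape and class count below are placeholders).
import tensorflow as tf

# Load InceptionResNetV2 pre-trained on ImageNet, without its classifier head.
base_model = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
base_model.trainable = False  # freeze the learned features

# Reuse the frozen features and train only a small new head on your dataset.
inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # placeholder: 10 classes

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Only the new dense head is trained at first; the frozen base can optionally be unfrozen later for fine-tuning at a lower learning rate.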

ResNet and Inception have been central to the largest advances in image recognition in recent years, delivering very good accuracy at a relatively low computational cost.

Inception-ResNet combines the Inception architecture with residual connections.

Residual Inception blocks

  1. Each Inception block is followed by a filter-expansion layer (a 1 × 1 convolution without activation) that scales up the dimensionality of the filter bank to match the depth of the input before the addition, as sketched in the code below.
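
As a rough illustration of that structure, here is a minimal Keras sketch of a residual Inception block, assuming a TensorFlow/Keras setup. The branch widths are illustrative, not the exact filter counts from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers


def residual_inception_block(x):
    """Illustrative residual Inception block (branch widths are made up)."""
    input_depth = x.shape[-1]

    # Parallel Inception-style branches.
    branch_0 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)

    branch_1 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    branch_1 = layers.Conv2D(32, 3, padding="same", activation="relu")(branch_1)

    # Concatenate the branches into one filter bank.
    mixed = layers.Concatenate()([branch_0, branch_1])

    # Filter expansion: a 1x1 convolution *without* activation that scales the
    # filter bank up to the input depth so the shapes match for the addition.
    expanded = layers.Conv2D(input_depth, 1, padding="same", activation=None)(mixed)

    # Residual connection: add the block's output to its input.
    out = layers.Add()([x, expanded])
    return layers.Activation("relu")(out)


# Example usage with an arbitrary feature-map shape.
inputs = tf.keras.Input(shape=(35, 35, 320))
outputs = residual_inception_block(inputs)
```

Because the expansion layer restores the input depth, the addition works without reshaping the shortcut path, which is what lets the Inception branches stay relatively narrow and cheap.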
