Face Recognition using Transfer Learning and VGG16

Megha Bansal
Published in Analytics Vidhya
Aug 26, 2020 · 4 min read

Transfer learning is a method of reusing the knowledge of a pre-trained model for another task. It can be used for classification, regression and clustering problems. Collecting related training data and rebuilding models from scratch is a long process; in such cases, transferring knowledge (transfer learning) from a different but related domain is desirable.

What is VGG16?

VGG is a Convolutional Neural Network architecture proposed by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group at the University of Oxford in 2014. It was submitted to the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC2014), and the model achieves 92.7% top-5 test accuracy on the ImageNet dataset.

  • The first and second convolutional layers consist of 64 kernel filters of size 3×3. As the input image (an RGB image with depth 3) passes through the first and second convolutional layers, the dimensions change to 224×224×64. The resulting output is then passed to a max-pooling layer with a stride of 2.
  • The third and fourth convolutional layers consist of 128 kernel filters of size 3×3. These two layers are followed by a max-pooling layer with stride 2, and the resulting output is reduced to 56×56×128.
  • The fifth, sixth and seventh layers are convolutional layers with kernel size 3×3. All three use 256 feature maps. These layers are followed by a max-pooling layer with stride 2.
  • The eighth through thirteenth layers are two sets of three convolutional layers with kernel size 3×3 and 512 kernel filters each. Each set is followed by a max-pooling layer with stride 2.
  • The fourteenth and fifteenth layers are fully connected hidden layers of 4096 units each, followed by a softmax output layer (the sixteenth layer) of 1000 units. This full stack can be inspected directly from the model summary, as shown below.
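As a minimal sketch (assuming Keras with a TensorFlow backend, as used later in this post), the architecture described above can be verified by loading the pre-trained model and printing its summary:

```python
from keras.applications.vgg16 import VGG16

# Load the pre-trained VGG16 network with its ImageNet weights,
# including the three fully connected layers at the top
model = VGG16(weights='imagenet', include_top=True)

# Prints the 13 convolutional layers, 5 max-pooling layers
# and 3 fully connected layers described above
model.summary()
```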

Transfer Learning Using VGG16

We can add one more layer, or retrain the last layer, to extract the features relevant to our images. Instead of starting from random weights, we can also initialize with VGG16's weights and train again (fine-tuning). In this task, we perform face recognition using transfer learning: we use the pre-trained weights and freeze the earlier (input-side) layers so that their weights are kept as they are.

I have used Google Colab for training this model.

1) keras.applications with TensorFlow as the backend is used to import the VGG16 model and its weights. include_top is set to False so that the output layers are not included; otherwise we would not be able to add our own fully connected layers after it. Then we freeze all the layers.
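A minimal sketch of this step (assuming the keras.applications VGG16 model):

```python
from keras.applications.vgg16 import VGG16

# VGG16 expects 224x224 RGB images
img_rows, img_cols = 224, 224

# include_top=False drops the 1000-class output layers so we can attach our own head
vgg = VGG16(weights='imagenet',
            include_top=False,
            input_shape=(img_rows, img_cols, 3))

# Freeze every pre-trained layer so its weights are not updated during training
for layer in vgg.layers:
    layer.trainable = False
```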

2) We append new layers to the previous model and print the entire structure of our model with model.summary().
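A sketch of this step, building on the frozen base from the previous snippet (the head sizes and the number of classes are hypothetical, not the exact values from the original code):

```python
from keras.models import Model
from keras.layers import Dense, Flatten

num_classes = 2   # hypothetical: one class per person in the training folders

# Attach a new, trainable fully connected head on top of the frozen VGG16 base
x = Flatten()(vgg.output)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=vgg.input, outputs=predictions)

# Print the entire structure: frozen convolutional base + new dense layers
model.summary()
```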

3) I have mounted Google Drive in Colab to import the image dataset used for training and prediction.
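In Colab this is done with:

```python
from google.colab import drive

# Mount Google Drive at /content/drive so the dataset folders are visible in Colab
drive.mount('/content/drive')
```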

4) Import the training images; this also tells us the number of images and classes:
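A sketch using Keras ImageDataGenerator (the Drive paths below are hypothetical placeholders for the dataset folders, one sub-folder per person):

```python
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical paths inside the mounted Drive
train_dir = '/content/drive/My Drive/faces/train'
val_dir = '/content/drive/My Drive/faces/validation'

train_datagen = ImageDataGenerator(rescale=1./255,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

# flow_from_directory prints "Found N images belonging to K classes."
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='categorical')
val_generator = val_datagen.flow_from_directory(val_dir,
                                                target_size=(224, 224),
                                                batch_size=32,
                                                class_mode='categorical')
```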

5) Train the new layers to build the VGG-based model using transfer learning, then save the model as face recognisation.h5.
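A sketch of the training step, assuming a recent TF 2.x Keras where model.fit accepts generators (the optimizer and epoch count are illustrative, not the exact values from the original code):

```python
# Train only the new head; the frozen VGG16 base keeps its ImageNet weights
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(train_generator,
          epochs=5,
          validation_data=val_generator)

# Save the trained model to disk
model.save('face recognisation.h5')
```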

6) Load the saved model and use it to recognize faces:
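A sketch of the prediction step (the class_labels mapping and the test image path are hypothetical; in practice the mapping should match the class indices reported by flow_from_directory):

```python
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

# Load the saved model from disk
model = load_model('face recognisation.h5')

# Hypothetical mapping from class index to person name
class_labels = {0: 'person_a', 1: 'person_b'}

def recognize_face(img_path):
    # Load and preprocess a single test image the same way as the training images
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img) / 255.0
    x = np.expand_dims(x, axis=0)

    # Predict the class probabilities and return the most likely person
    pred = model.predict(x)
    return class_labels[int(np.argmax(pred))]

print(recognize_face('/content/drive/My Drive/faces/test/sample.jpg'))
```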

Output:

For complete code please visit https://github.com/megha1906/TransferLearning-using-VGG16

Thank you!

For any queries, connect with me on LinkedIn.

Originally published at https://www.linkedin.com.
