Facial age prediction with Fastai2

Ajaykumaar S
May 23 · 4 min read

Not so long ago, age prediction apps were quite popular among iOS users. In this post, we will create a deep learning model that predicts the age of a person based on their facial image.

Let’s get started!

We will be using the fastai2 library for this model. It contains all the sub-libraries needed for NLP, recommendation systems and computer vision; for this computer vision task, we will use the vision sub-library.

!pip install fastai2 -q
from fastai2.vision.all import *
from fastai2.basics import *

We need a dataset that contains facial images along with their ages. Our model should extract features from these images by passing them through several layers of matrix multiplications and output a number (i.e. the age); this is called the ‘forward pass’. This predicted age is then compared with the actual age to compute a loss, which is used to go back and adjust the values (weights) of our matrices; this, not so surprisingly, is called the ‘backward pass’.
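To make this concrete, here is a minimal, pure-Python illustration (not part of the fastai code) of one forward and backward pass for a one-weight regression model, with made-up numbers:

```python
# Toy forward/backward pass: one weight, squared-error loss,
# one step of gradient descent. Numbers are purely illustrative.
w = 0.5                            # randomly initialised weight
x, y_true = 2.0, 30.0              # input feature and the actual age

y_pred = w * x                     # forward pass: prediction
loss = (y_pred - y_true) ** 2      # squared-error loss

grad = 2 * (y_pred - y_true) * x   # backward pass: dLoss/dw
lr = 0.01                          # learning rate
w -= lr * grad                     # update the weight
new_loss = (w * x - y_true) ** 2   # loss is smaller after the update
```

A real network does the same thing, just with millions of weights and a library (here, PyTorch via fastai) computing the gradients automatically.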

Let’s get the data from Kaggle (link below) and upload it to our Google Drive.

Once uploaded to Drive, start the Google Colaboratory environment, connect to a runtime and mount the Drive. We need a path to where our data is stored.
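Mounting Drive inside a Colab notebook looks like this (this only runs in the Colab environment, where the `google.colab` module is available):

```python
from google.colab import drive  # available only inside Google Colab
drive.mount('/content/drive')   # follow the prompt to authorise access
```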

path=Path('/content/drive/My Drive/face_age')

In this dataset, there are 99 folders, each named after the age of the people whose images it contains. So the target, the y value, is the name of the folder.

Now we shall create a get_y function that takes the name of the folder and converts it into an integer so we can do regression. Pipeline is specific to fastai; it chains the given functions so they run in sequence.

def to_num(x: str): return int(x)
get_y = Pipeline([parent_label, to_num])
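To see what this pipeline produces: fastai’s parent_label simply returns the name of a file’s parent folder, which to_num then converts to an integer. A plain-Python equivalent, on a hypothetical file path from this dataset:

```python
from pathlib import Path

def parent_label_py(p):
    # What fastai's parent_label does: return the parent folder's name
    return Path(p).parent.name

# Hypothetical image path: the folder name is the person's age
sample = Path('/content/drive/My Drive/face_age/025/00123.png')
age = int(parent_label_py(sample))  # folder name '025' becomes the target 25
```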

We’ll use the DataBlock API to get the data, apply transformations and augmentations, split it into training and validation sets, get the y value and normalise.

item_tfms = Resize(240, method='squish')
batch_tfms = [*aug_transforms(size=224, max_warp=0, max_rotate=7.0, max_zoom=1.0)]

Now, let’s make this data block into a data loader with a batch size of 64.
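Putting the pieces together, the DataBlock and data loader might look something like this. This is a sketch: the block types, the random 80/20 splitter and the ImageNet normalisation are my assumptions, since the post only shows the transforms.

```python
dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock),            # image in, number out
    get_items=get_image_files,                       # collect all image paths
    splitter=RandomSplitter(valid_pct=0.2, seed=42), # 80/20 train/valid split
    get_y=get_y,                                     # parent folder name -> int
    item_tfms=Resize(240, method='squish'),
    batch_tfms=[*aug_transforms(size=224, max_warp=0, max_rotate=7.0, max_zoom=1.0),
                Normalize.from_stats(*imagenet_stats)])

dls = dblock.dataloaders(path, bs=64)  # batch size of 64
dls.show_batch()
```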

Output of dls.show_batch()

Let’s create a default CNN learner using the cnn_learner() function with the resnet18 architecture. As this is a regression problem, we should specify the y_range so that predictions stay within sensible bounds.

learn = cnn_learner(dls, resnet18, loss_func=MSELossFlat(), y_range=(10.0, 70.0))

Now let’s train the model, i.e. fit, for 5 epochs and make a prediction on an image.
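The training and prediction step could be sketched as follows; fine_tune and predict are standard fastai calls, and the image path here is just a placeholder:

```python
learn.fine_tune(5)  # 5 epochs: train the head first, then the whole model

img = PILImage.create('/content/drive/My Drive/face_age/050/some_image.png')
pred, _, _ = learn.predict(img)  # predicted age as a float; varies per run
print(pred)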


The output age I got was 50.8, and of course it might vary each time you run the model because of the randomness in parameter initialisation.

That, by all means, is a fairly good result, but we’ll still dig into cnn_learner() and customise it with a new architecture, activation function, self-attention and optimiser. If we look inside learner.py in fastai’s GitHub repo, cnn_learner() builds a CNN model and passes it into a Learner(); the model itself is assembled by calling create_body() and create_head().

We shall copy the create_body() function and make some changes so that we can pass in an activation function and self-attention (only the changed lines are shown below).

def create_custom_body(arch, n_in=3, pretrained=True, act_cls=nn.ReLU, sa=False, cut=None):
    model = arch(pretrained=pretrained, act_cls=act_cls, sa=sa)

Now, we shall use the xresnet18 architecture, use Mish as the activation function and set self-attention to True.

body = create_custom_body(xresnet18, pretrained=True, act_cls=Mish, sa=True)

To create the head, we need the number of features coming out of the body and the number of outputs the head should produce. We have to double the number of input features to the head because the head concatenates an average-pooling and a max-pooling layer. As we are doing regression, we have to set a y_range (i.e. output boundaries).

nf = num_features_model(nn.Sequential(*body.children())) * 2; nf
head = create_head(nf, dls.c, y_range=(0, 100))

Now that our head is ready, we can pass the body and head into an nn.Sequential() and initialise the head’s weights.

model = nn.Sequential(body, head)
apply_init(model[1], nn.init.kaiming_normal_)

We can now pass our model into a Learner(). Since fastai uses discriminative learning rates, we need to split the model into parameter groups so that each group is trained at its own learning rate; the split function can also be found in the same learner.py file. We’ll also use the ranger optimiser, which is simply RAdam wrapped in Lookahead.

def _xresnet_split(m): return L(m[0][:3], m[0][3:], m[1:]).map(params)
learn = Learner(dls, model, loss_func=MSELossFlat(), splitter=_xresnet_split, opt_func=ranger)

Now our learner is ready! We’ll freeze it and train; after roughly 10 epochs the validation loss drops sharply, and we can then make a prediction on an image.
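A sketch of this stage: fit_flat_cos is the schedule commonly paired with ranger in fastai2, and the learning rate and image path here are assumptions, not values from the original post.

```python
learn.freeze()                   # train only the head first
learn.fit_flat_cos(10, lr=1e-2)  # flat LR, then cosine decay

img = PILImage.create('/content/drive/My Drive/face_age/050/some_image.png')
pred, _, _ = learn.predict(img)  # predicted age as a float
```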

As we can see, the loss has not yet started to rise, which means we can still train for a few more epochs with a reduced learning rate. We can also unfreeze the model and train for a few more epochs to get even better results. Another trick that could improve accuracy is progressive resizing: increase the size of the images, say from 240 to 360, and keep training.
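Progressive resizing could look something like the sketch below. Here `dblock` stands for the DataBlock built earlier (the variable name, batch size and learning rate are my assumptions):

```python
# Rebuild the dataloaders at a larger image size and keep
# training the same model (progressive resizing).
dblock_big = dblock.new(item_tfms=Resize(360, method='squish'),
                        batch_tfms=[*aug_transforms(size=320, max_warp=0)])
learn.dls = dblock_big.dataloaders(path, bs=32)  # smaller batches for bigger images
learn.fit_flat_cos(5, lr=1e-3)                   # continue at a lower learning rate
```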

You can get the full code from the GitHub link below, and check out Jeremy Howard’s course on deep learning to learn more about fastai.

Thank you and keep learning!

Analytics Vidhya
