Let the Face Meet Machine Learning

Dealing with faces is one of the most interesting tasks in Machine Learning. In this post I will show how you can use ML to do magnificent things with faces.

Special thanks to Adrian Rosebrock; his blog at https://www.pyimagesearch.com/ is inspiring and full of fantastic tutorials.
Without him I could not have written this post.

1-Face Detection using Haar features

In the beginning, there were Haar features, proposed by Paul Viola and Michael Jones as a feature descriptor used in computer vision and image processing for object detection. Using these features you can train an object detector that localizes faces. Luckily, this is already implemented, and a number of ready-made classifiers are available for you to apply directly without the burden of training your own.

Usually these classifiers are built as cascade classifiers for object detection. The "cascade" in the name means that several simpler classifiers are applied to each image region in sequence, until at some stage the candidate is rejected or all the stages are passed.

This sample code uses a ready-made face detector encoded as an XML file, "haarcascade_frontalface_default.xml", which is available with OpenCV.

import cv2

# Load the image and convert it to grayscale (OpenCV loads images in BGR order)
imageFileName = "image1.jpeg"
image = cv2.imread(imageFileName)
grayscale_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load the pre-trained Haar cascade and detect faces
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
faces = face_cascade.detectMultiScale(grayscale_image, scaleFactor=1.25, minNeighbors=6)

# Draw a rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow("image", image)
cv2.waitKey(0)

Face Detection using Haar classifier

There is a complete list of Haar cascade classifiers that you can simply use in your code at: https://github.com/opencv/opencv/tree/master/data/haarcascades

2-Facial landmark detection

The simplest way to do it is with dlib: initialize dlib’s HOG-based face detector and then create the facial landmark predictor using a model of your choice, such as shape_predictor_68_face_landmarks.dat, as follows:

import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

68 Facial landmark points

Then the code will iterate over each detected face and find the key points that define it.
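Here is a minimal sketch of that loop; the image file name "image1.jpeg" is an assumption, and the detector and predictor are the same ones initialized above. It simply draws each of the 68 landmark points on the image.

import cv2
import dlib

# Load the image and convert to grayscale for dlib
image = cv2.imread("image1.jpeg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

for rect in detector(gray, 1):        # second argument = number of upsampling passes
    shape = predictor(gray, rect)     # 68 landmark points for this face
    for i in range(68):
        part = shape.part(i)
        cv2.circle(image, (part.x, part.y), 2, (0, 255, 0), -1)

cv2.imshow("landmarks", image)
cv2.waitKey(0)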

Here is a sample demo of the output of facial landmark detection with 68 keypoints.

If you want more information about this application, along with sample code, you can check https://www.pyimagesearch.com/2018/04/02/faster-facial-landmark-detector-with-dlib/

You can also use the 5-point facial landmark model, which is faster and returns only 2 points for the left eye, 2 points for the right eye, and 1 point for the bottom of the nose.
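With dlib this just means swapping in the 5-point model file (shape_predictor_5_face_landmarks.dat, which dlib provides as a separate download):

# Same API as before, only the model file changes
predictor = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")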

Another approach is to use HOG features fed to a linear Support Vector Machine (SVM) classifier. If you detect faces this way, you will also need to apply a sliding window with non-maximum suppression (NMS) to detect multiple faces in an image.
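As a rough illustration of that idea, here is a minimal training sketch using scikit-image's HOG features and scikit-learn's linear SVM. The positive_windows and negative_windows lists are placeholders standing in for real face and non-face patches you would collect yourself, and the HOG settings are just typical values.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(window):
    # Typical HOG settings: 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks
    return hog(window, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Placeholder training patches (in practice: cropped face / non-face windows of one fixed size)
positive_windows = [np.random.rand(64, 64) for _ in range(10)]
negative_windows = [np.random.rand(64, 64) for _ in range(10)]

X = [hog_features(w) for w in positive_windows] + [hog_features(w) for w in negative_windows]
y = [1] * len(positive_windows) + [0] * len(negative_windows)

classifier = LinearSVC()
classifier.fit(np.array(X), np.array(y))

# At detection time you slide a window over an image pyramid, score each window
# with classifier.decision_function, and apply non-maximum suppression (NMS)
# to merge overlapping detections into the final face boxes.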

3-Face classification using Deep Learning

This is actually one of the most impressive things that ML can do. Now that we can detect faces, why not understand the emotion on that face? A lot of researchers have worked on this, but my favourite is the paper “Real-time Convolutional Neural Networks for Emotion and Gender Classification”, whose code is available at https://github.com/oarriaga/face_classification

Here is a sample demo of the output of this code on me and my son :)
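Under the hood that repository runs a Keras CNN on a cropped face. Here is a minimal sketch of that kind of inference; the model file name, the 64x64 grayscale input size, and the face crop path are assumptions (the repository ships its own trained models and demo scripts), while the emotion labels are the standard FER2013 classes.

import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]  # FER2013 labels

model = load_model("emotion_model.hdf5")  # hypothetical path to a trained emotion model

# Load a cropped face, resize and normalize it to the assumed network input
face = cv2.imread("face_crop.jpeg", cv2.IMREAD_GRAYSCALE)
face = cv2.resize(face, (64, 64)).astype("float32") / 255.0
face = face.reshape(1, 64, 64, 1)  # batch of one grayscale image

probabilities = model.predict(face)[0]
print(EMOTIONS[int(np.argmax(probabilities))])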

4-Detect drowsiness of a driver using dlib

1-Find a face 
2-Apply facial landmark detection to extract the eye regions; you can use shape_predictor_68_face_landmarks.dat
3-Compute the eye aspect ratio
4-If the eye aspect ratio stays below a certain threshold for a certain amount of time, this means the driver has fallen asleep (see the sketch after this list)
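As a rough sketch of steps 3 and 4, the eye aspect ratio (EAR) from the referenced paper uses the six landmark points of each eye. The threshold, frame count, and per-frame EAR values below are placeholders you would replace with your own tuning and your real video loop.

from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # eye: the six (x, y) landmark points of one eye, in 68-point model order
    A = dist.euclidean(eye[1], eye[5])  # first vertical distance
    B = dist.euclidean(eye[2], eye[4])  # second vertical distance
    C = dist.euclidean(eye[0], eye[3])  # horizontal distance
    return (A + B) / (2.0 * C)

# Drowsiness rule: if the EAR stays below the threshold for enough consecutive
# frames, raise an alarm. Both values are assumptions to be tuned.
EAR_THRESHOLD = 0.25
CONSECUTIVE_FRAMES = 48

ear_values_per_frame = [0.2] * 60  # placeholder: 60 frames of "closed" eyes
closed_frames = 0
for ear in ear_values_per_frame:
    if ear < EAR_THRESHOLD:
        closed_frames += 1
        if closed_frames >= CONSECUTIVE_FRAMES:
            print("Drowsiness alert!")
    else:
        closed_frames = 0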

Reference: Real-Time Eye Blink Detection using Facial Landmarks

The following video shows a demo of drowsiness detection.

5-Face Tracking

The simplest way to do face tracking is to detect faces using a deep learning face detector like res10_300x300_ssd_iter_140000.caffemodel, assign an ID to each detection, and then use centroid tracking to keep each ID attached to its tracked face.
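Here is a minimal sketch of the detection half using OpenCV's DNN module; the "deploy.prototxt" file name, the frame path, and the 0.5 confidence threshold are assumptions to match your own copy of the model. The comment at the end outlines the centroid-tracking step.

import cv2
import numpy as np

# Load the SSD face detector (use the prototxt that matches your caffemodel)
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel")

frame = cv2.imread("frame.jpeg")
(h, w) = frame.shape[:2]

# Mean BGR values commonly used with this model
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

boxes = []
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # confidence threshold (an assumption; tune as needed)
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        boxes.append(box.astype("int"))

# For tracking, run this on every video frame, compute the centroid of each box,
# and match centroids between frames by minimum distance so each face keeps the
# same ID over time (centroid tracking).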

The following demo shows this face tracking in action.