Why do we use face recognition in real life?

Facial recognition is used to identify a person from a digital image or a video source. As of 2018, it has become popular in big smart cities for monitoring people's movements and behavior. The most common reasons to use it are public security and commercial interest in tracking what people want. Examples on the public-security side include monitoring public CCTV cameras and immigration checks at airports. On the commercial side, retail stores, schools, medical clinics and similar businesses can use facial recognition to track shoppers, students and patients: How often do shoppers come back to these shops? What do people buy most? How many patients are happy during their visits to clinics, judging by face-emotion technology?

(photo credit: ©iStock.com | LeoPatrizi)

In this article, I would like to show how to use the face_recognition API in a Docker container to perform real-time processing of a webcam video stream. It will process each video frame at 1/4 resolution and only detect faces in every other frame of the video. Since I used a webcam, an OpenCV installation is a must. The full code can be found at the GitHub link below, shared by Adam Geitgey:

https://github.com/ageitgey/face_recognition

TensorFlow™ is an open source software library for high performance numerical computation.

This model is built with dlib's face recognition library and has an accuracy of over 99% on the Labeled Faces in the Wild benchmark. The face_recognition API depends on dlib, which is written in C++. Dlib contains a wide range of machine learning algorithms, all designed to be highly modular, quick to execute, and simple to use via a clean and modern C++ API.

Dlib is a modern C++ toolkit containing machine learning algorithms

This Python code is recommended to run on macOS or Linux; Windows is not officially supported, but I was able to run it on my Windows 7 machine, so it should work on your Windows platform too.

The prerequisites are as follows (a sample installation command is shown after the list):

  • Python 3.5
  • TensorFlow
  • OpenCV ver 3.4
  • Dlib C++
  • CMake 3.12
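
As a rough setup sketch (assuming a pip-based environment; dlib is compiled during installation, so CMake and a C++ compiler must already be available on your machine), the Python packages can be installed with:

$ pip3 install face_recognition opencv-python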

Import the necessary libraries:

import face_recognition
import cv2

Load a sample picture and learn how to recognize it.

obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
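
Once the known encoding is stored, any new face can be compared against it. Here is a minimal sketch of that comparison (the file name unknown.jpg is just a placeholder for illustration):

# Load a second image and compute the encoding of the first face found in it
unknown_image = face_recognition.load_image_file("unknown.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# compare_faces returns one True/False result per known encoding
results = face_recognition.compare_faces([obama_face_encoding], unknown_encoding)
print("Is this Obama?", results[0])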

To build our deep learning-based real-time face recognition with OpenCV, we'll need to access our webcam in an efficient manner. The webcam can be the default device built into your system, a third-party camera connected by cable, or an IP camera. Commonly, your laptop webcam is device "0".
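
If your webcam is not device "0", or you want to read from an IP camera instead, you can pass a different index or a stream URL to OpenCV. A small sketch (the RTSP address is only a placeholder):

video_capture = cv2.VideoCapture(1)  # second camera attached to the system
# video_capture = cv2.VideoCapture("rtsp://192.168.1.10:554/stream")  # IP camera, placeholder URL
if not video_capture.isOpened():
    raise RuntimeError("Could not open the selected camera")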

Now we can start a live video stream from the default device with OpenCV using the line below:

video_capture = cv2.VideoCapture(0)
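
The rest of the script loops over the webcam frames. Below is a minimal sketch of that loop, following the approach described earlier (quarter-resolution frames, detection on every other frame) and assuming the obama_face_encoding computed above; the complete scripts are in the GitHub repository linked above:

process_this_frame = True
face_locations, face_names = [], []

while True:
    ret, frame = video_capture.read()

    # Shrink the frame to 1/4 size so detection runs faster
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    # OpenCV uses BGR, face_recognition expects RGB
    rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)

    # Only detect faces in every other frame to save time
    if process_this_frame:
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        face_names = []
        for face_encoding in face_encodings:
            matches = face_recognition.compare_faces([obama_face_encoding], face_encoding)
            face_names.append("Barack Obama" if matches[0] else "Unknown")
    process_this_frame = not process_this_frame

    # Scale the boxes back up, since detection ran on a 1/4-size frame
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        top, right, bottom, left = top * 4, right * 4, bottom * 4, left * 4
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.putText(frame, name, (left, top - 10), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()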

Let's execute the Python command for our webcam; it will take a few seconds to start the camera:

$ python facerec_from_webcam.py

Here is a sample screenshot of running this code:

Webcam real time face detection

Since OpenCV can access your webcam, you should see the output video frames with any detected faces marked. Detection speed depends on whether you are using your computer's CPU or GPU resources. Face recognition can also be done in parallel if you have a computer with multiple CPU cores. For example, if your system has 4 CPU cores, you can process about 4 times as many images in the same amount of time by using all of your CPU cores in parallel.
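
The same idea applies if you want to scan a batch of saved images instead of a live stream: you can spread the work across cores with Python's multiprocessing module. A small sketch (the file names and the worker count below are placeholders; the face_recognition command-line tool exposes a similar --cpus option):

from multiprocessing import Pool

import face_recognition

def count_faces(image_path):
    # Load one image and return how many faces were detected in it
    image = face_recognition.load_image_file(image_path)
    return image_path, len(face_recognition.face_locations(image))

if __name__ == "__main__":
    # Placeholder file names -- replace with your own images
    image_paths = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg", "frame_004.jpg"]

    # One worker per CPU core; with 4 cores, roughly 4 images are processed at once
    with Pool(processes=4) as pool:
        for path, n_faces in pool.map(count_faces, image_paths):
            print(path, "->", n_faces, "face(s)")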

You will also notice that as people come closer to the camera, the predictions become more accurate than at a greater distance.
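
If distant faces matter in your setup, one optional tweak is to let the detector upsample the image before searching for faces (slower, but it finds smaller faces); this uses the number_of_times_to_upsample parameter of face_recognition.face_locations:

# Upsampling once or twice helps find smaller (more distant) faces, but slows detection down
face_locations = face_recognition.face_locations(rgb_small_frame, number_of_times_to_upsample=2)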

Conclusion

In this post, we learned how to perform real-time face recognition using deep learning with OpenCV and a webcam. If you enjoyed and learned from reading this article, please give it a clap. You can send me a note via LinkedIn if you have any questions, comments, or ideas.

Resources:

https://github.com/ageitgey/face_recognition
https://pypi.org/project/face_recognition/
http://dlib.net/
https://cmake.org/
https://www.tensorflow.org/