How Does Real-Time Face Recognition Work with OpenCV?

Eazy Ciphers
Published Aug 20, 2020 · 5 min read

An eazy code from eazy ciphers

Are you keen to learn about the implementation of Real-Time Face recognition?

Here Eazy Ciphers presents a simple, innovative way to run a real-time face recognition model that predicts the identity of an individual's face.

When I say real-time, don't be confused about what I am going to discuss here: it is nothing but a model that detects and recognizes faces as they appear on the webcam.

If you want hands-on experience with this model, go ahead and step into its implementation.

In day-to-day life, facial recognition has become part of the things around us. So here is a quick example of real-time face recognition before getting into the topic.

When you register your face for a smart lock on your phone, tablet, or laptop, the device captures a real-time image of you and stores it in a database for later use in recognizing that particular person.

This approach works through many iterative predictions against the input image. Similarly, real-time face recognition can be implemented with the OpenCV framework in Python.

These pieces are packed together into one combined model for real-time use.

Face Recognition:

“Face Recognition” — the name itself gives you a comprehensive definition of what it means. Well! Getting into it, face recognition is the technical process of identifying or detecting a person's face from a digital image or video given as input.

The accuracy of face recognition determines the quality of the output, so the factors affecting it should be handled rather than neglected. To run our model, first make sure the library is installed on your local system:

pip install face_recognition

If you face any error or confusion while installing the face_recognition library, click here to rectify the errors with a simple reference procedure.

Face recognition by itself can't give a well-polished output, which is where OpenCV comes into the scene.

Sample output of face recognition from a pre-recorded video.

OpenCV:

OpenCV is a prominent Python library for implementing real-time applications. It behaves like the root of the tree in the computer-vision world.

Used with face_recognition, OpenCV handles clustering and feature extraction of the face images we train on as input. It targets landmarks in the image and iteratively trains on them using deep-learning methods from computer vision.

Install OpenCV on your local system:

pip install opencv-python
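One detail worth knowing before the code below: OpenCV loads images as NumPy arrays in BGR channel order, while face_recognition expects RGB, so every frame has to be converted. A minimal sketch with a made-up one-pixel "image" (no webcam needed) shows what that conversion actually does:

```python
import numpy as np

# OpenCV represents an image as a height x width x 3 array in BGR order.
# A pure-red pixel in BGR is therefore (0, 0, 255).
bgr_pixel = np.array([[[0, 0, 255]]], dtype=np.uint8)

# Reversing the last axis swaps the B and R channels, giving RGB order.
rgb_pixel = bgr_pixel[:, :, ::-1]

print(rgb_pixel[0, 0].tolist())  # [255, 0, 0] -> the same red, now in RGB
```

In real code the equivalent conversion is `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)`, which also returns a contiguous array.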

OpenCV detection works through clustering, similarity detection, and classification of images using deep-learning algorithms.

Why do we use OpenCV as a key tool in real-time face recognition?

Humans can easily detect faces, but how can we train a machine to recognize them? Here OpenCV fills the gap between humans and computers and acts as the computer's vision.

Taking a real-time example: when a human meets new people, he memorizes their faces for identification later. The brain iteratively trains on a person's face in the background. So when he sees that particular face again, he says, “Hey John! How are you?”.

This iterative identification and recognition of faces gives the computer a chance to think the same way a human does.

OpenCV is an important tool in computer vision. When we use it, recognition follows the steps below:

  • Extract data from the input.
  • Identify the face in the image.
  • Extract unique characteristics that inform the prediction.
  • Differentiate that person's features, such as the nose, mouth, ears, eyes, and the other dominant features of the face.
  • Compare the face against known faces in real time.
  • Output the recognized person's face.
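The comparison step above boils down to measuring the distance between face encodings: two photos of the same person produce nearby 128-dimensional vectors, while different people produce distant ones. A minimal NumPy sketch, where the "encodings" are invented stand-ins for the real vectors that face_recognition.face_encodings would produce:

```python
import numpy as np

# Hypothetical 128-dimensional face encodings (real ones come from the
# face_recognition library); fixed random vectors stand in for them here.
rng = np.random.default_rng(0)
known_encoding = rng.normal(size=128)
same_person = known_encoding + rng.normal(scale=0.01, size=128)  # nearly identical
different_person = rng.normal(size=128)                          # unrelated

def face_distance(known, candidate):
    # Euclidean norm of the difference, the same metric
    # face_recognition.face_distance uses.
    return np.linalg.norm(known - candidate)

# The library's default match threshold is 0.6.
print(face_distance(known_encoding, same_person) < 0.6)       # True: a match
print(face_distance(known_encoding, different_person) < 0.6)  # False: no match
```

The 0.6 threshold is the default tolerance in face_recognition; lowering it makes matching stricter.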

Face Recognition with OpenCV in Python:

The code shown here can also be downloaded from our GitHub repository for ease of understanding.

Importing all the packages:

import face_recognition
import cv2
import numpy as np

Load and train with the images:

# Load a sample picture and learn how to recognize it.
Jithendra_image = face_recognition.load_image_file("jithendra.jpg")
# face_encodings returns one encoding per face found; [0] assumes one face per photo.
Jithendra_face_encoding = face_recognition.face_encodings(Jithendra_image)[0]
# Load a second sample picture and learn how to recognize it.
Modi_image = face_recognition.load_image_file("Modi.jpg")
Modi_face_encoding = face_recognition.face_encodings(Modi_image)[0]

Face Encodings:

# Create arrays of known face encodings and their names
known_face_encodings = [
    Jithendra_face_encoding,
    Modi_face_encoding,
]
known_face_names = [
    "Jithendra",
    "Modi"
]
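The two lists above are kept in the same order, so an index into one maps to the other. A small sketch of the lookup the main loop performs below, with invented distances standing in for the real output of face_recognition.face_distance:

```python
import numpy as np

known_face_names = ["Jithendra", "Modi"]

# Hypothetical distances from a new face to each known encoding.
face_distances = np.array([0.72, 0.41])
# 0.6 is the face_recognition library's default match tolerance.
matches = [d <= 0.6 for d in face_distances]

# Pick the closest known face, but only accept it if it is within tolerance.
best_match_index = np.argmin(face_distances)
name = known_face_names[best_match_index] if matches[best_match_index] else "Unknown"
print(name)  # Modi
```

Keeping the encodings and names in parallel lists is what lets a single `argmin` index recover the right name.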

Main Method:

While real-time face identification is running, the code detects faces and follows these steps:

  • Grab a single frame from the real-time video.
  • Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses).
  • Find all the faces and face encodings in the frame.
  • Loop through each face in the frame and check whether it matches an existing face.
  • If a face doesn't match any existing face, output "Unknown".
  • Otherwise, draw a box around the identified face.
  • Label the identified face with its name.
  • Display the resulting image.
# Open the default webcam (device 0)
video_capture = cv2.VideoCapture(0)

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # Convert from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Find all the faces and face encodings in the current frame
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
        # See if the face is a match for the known face(s)
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"
        # Use the known face with the smallest distance to the new face
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

Quit:

This check runs inside the while loop, so it is indented to match:

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Release handle to webcam:

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

Inputs and outputs:

Sample inputs provided to the model during the training process….

Input:

Sample input image used to train the code

Output:

Recording of the output

Our GitHub account, for reference to the full code: https://github.com/eazyciphers/deep-machine-learning-tutors
