Eye detection using OpenCV and dlib

Poornachandra Kashi
Published in GeekyNerds
4 min read · Mar 8, 2020

Let's get started with this project. The goal is to build Python code that detects eyes. We will go through the code line by line, and by the end the program will locate the eyes in a person's face. Please don't mind my grammar, because it's my first time writing a technical blog.

Implementation using code

Part 1

We start from the first line of the code by importing the Python libraries the program depends on.


import cv2           # OpenCV library for image capture and processing
import dlib          # dlib library for face detection and facial landmarks
import numpy as np   # NumPy, a collection of mathematical functions

If you face an error such as "ModuleNotFoundError: No module named 'dlib'", you need to install the libraries before running the main file:

pip install dlib           # to install dlib
pip install numpy
pip install opencv-python
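If the installation succeeds, a quick sanity check is to import the three libraries and print their versions (a minimal sketch of my own; the versions shown will depend on your machine). Note that installing dlib from source usually requires CMake and a C++ compiler.

import cv2
import dlib
import numpy as np

print("OpenCV version:", cv2.__version__)
print("dlib version:", dlib.__version__)
print("NumPy version:", np.__version__)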

Part 2

To capture video we have to create a VideoCapture object.
cap = cv2.VideoCapture(0)
The cv2. prefix shows that we are calling into the OpenCV library; cap is the VideoCapture object we created. Since only one camera is connected to the device, we pass 0 (or -1). If a second camera is attached, you can select it by passing 1 to VideoCapture.
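As a small defensive sketch (the error message is my own wording, not part of the original code), you can check that the camera actually opened before using it:

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open the webcam; try passing -1 or 1 instead of 0.")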

Part 3

detector = dlib.get_frontal_face_detector()

This initializes dlib's pre-trained face detector, which is based on a modification of the standard Histogram of Oriented Gradients (HOG) + Linear SVM method for object detection.

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

This loads the facial landmark predictor from the supplied shape-predictor file (shape_predictor_68_face_landmarks.dat must be present in the working directory).
But before we can detect any facial landmarks, we first need to detect the face in our input image.
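As a self-contained sketch before wiring up the webcam (the file name test.jpg is just a placeholder I chose), you could run the detector and predictor on a single photo to confirm the .dat file loads and a face is found:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("test.jpg")                  # any photo that contains a face
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale images
faces = detector(gray)                        # returns the detected face rectangles
print("Faces found:", len(faces))
for face in faces:
    landmarks = predictor(gray, face)
    print("Left eye corner (point 36):", landmarks.part(36).x, landmarks.part(36).y)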

Part 4

def midpoint(p1, p2):
    return int((p1.x + p2.x)/2), int((p1.y + p2.y)/2)

The midpoint function takes two landmark points as input and returns the integer midpoint between them; we will use it to find the centres of the top and bottom of the eye.

Each eye is represented by 6 (x, y)-coordinates, starting at the left corner of the eye (as if you were looking at the person) and then working clockwise around the remainder of the region.
There is a relation between the width and the height of these coordinates, which is why the code below draws a horizontal line and a vertical line across the eye.

Figure: the 68 facial landmark points in the face (the eyes are points 36–47).
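For reference, these are the landmark indices relied on below (these constant lists are my own addition for readability; the code later in the post indexes the points directly):

# In dlib's 68-point scheme the eyes occupy twelve points:
LEFT_EYE_POINTS  = [36, 37, 38, 39, 40, 41]   # the eye on the left of the image
RIGHT_EYE_POINTS = [42, 43, 44, 45, 46, 47]   # the eye on the right of the image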

Part 5

while True:
    # True keeps the loop running forever, until we exit it with the break statement.
    _, frame = cap.read()
    # Read one image frame from the camera connected to the device and store it in frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Convert the captured frame from the BGR (blue-green-red) colour format to grayscale.
    faces = detector(gray)
    # The detector function finds every face present in the grayscale image.
    for face in faces:
        # Loop over the detected faces; face gives access to each bounding box in turn.
        # x, y = face.left(), face.top()
        # x1, y1 = face.right(), face.bottom()
        # cv2.rectangle(frame, (x, y), (x1, y1), (0, 255, 0), 2)

        landmarks = predictor(gray, face)
        # Predict the 68 landmark points outlining the parts of this face.
        left_point = (landmarks.part(36).x, landmarks.part(36).y)
        # Point 36 is the left end of the left eye; refer to the landmarks figure above.
        right_point = (landmarks.part(39).x, landmarks.part(39).y)
        # Point 39 is the right end of the left eye.
        center_top = midpoint(landmarks.part(37), landmarks.part(38))
        # Midpoint of points 37 and 38: the centre of the upper eyelid of the left eye.
        center_bottom = midpoint(landmarks.part(41), landmarks.part(40))
        # Midpoint of points 41 and 40: the centre of the lower eyelid of the left eye.
        # Try matching these numbers with the landmark figure provided above.
        hor_line = cv2.line(frame, left_point, right_point, (0, 255, 0), 2)
        ver_line = cv2.line(frame, center_top, center_bottom, (0, 255, 0), 2)
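If you want to use the width/height relation mentioned in Part 4, a small optional sketch (my own addition, still inside the for loop) can measure the two lines we just drew; add "from math import hypot" next to the other imports if you try this:

        hor_length = hypot(left_point[0] - right_point[0], left_point[1] - right_point[1])
        ver_length = hypot(center_top[0] - center_bottom[0], center_top[1] - center_bottom[1])
        print("width/height ratio:", hor_length / ver_length)
        # Repeating the same four landmark steps with points 42-47 would mark the other eye.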

Part 6

Let's display the eyes marked in the input image frames (we are still inside the while loop).

    cv2.imshow("Frame", frame)
    # imshow displays the image in a window. It is a function from the OpenCV library,
    # so the syntax follows the pattern cv2.<function name>.
    key = cv2.waitKey(1)
    # cv2.waitKey() is a keyboard binding function. It waits the specified number of
    # milliseconds for a keyboard event and returns the code of the key pressed.
    if key == 27:
        break
        # 27 is the Esc key: pressing Esc exits the while loop.

Part 7

cap.release()
cv2.destroyAllWindows()
This releases the webcam and then closes all of the imshow() windows.

To execute the program, copy the code into a .py file, save it, and then run the file. The webcam will start and the program will detect your eyes.

OR

You can find the source code here.
Please give it a star.

To reach out to me:

Linkedin: https://www.linkedin.com/in/poornachandra-kashi-a13529168/

Github: https://github.com/poornachandrakashi

Facebook: https://www.facebook.com/poornachandra.kashi

WhatsApp number: 8904700354
