Auto-capture Selfie by Detecting Smile

Akash Bhandari · Published in Analytics Vidhya · Sep 8, 2020

Get beautiful selfies captured automatically when you smile: a Python project that detects your smile and takes the picture for you.

Everyone loves a smiling picture, so we will build a project that captures an image every time you smile. This is a simple, beginner-friendly machine learning project, and we will use the OpenCV library.

What is OpenCV?

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

Project Prerequisites

To implement this project, we need to know the following:

1. Basic concepts of Python
2. Basics of OpenCV

To install the library, you can use the pip installer from the command line:

pip install opencv-python
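
If the installation succeeded, a quick check from the Python prompt (a convenience snippet, not part of the original article) confirms that the package imports and prints its version:

import cv2
print(cv2.__version__)   # e.g. 4.x.x, depending on the installed build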

Download the Training Code & XML Files
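
If you prefer not to keep separate copies of the XML files, recent opencv-python wheels bundle the Haar cascades with the package; the sketch below (an assumption about your install, not something the original article relies on) loads them from cv2.data.haarcascades instead of a hard-coded folder.

import cv2

# The opencv-python wheel ships the Haar cascade XMLs; cv2.data.haarcascades
# points at the folder containing them (assumes a recent opencv-python install).
faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smileCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

print(faceCascade.empty(), smileCascade.empty())  # both should print False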

Steps to Develop the Project

The steps involved in implementing the smile detection and selfie capture project are listed below.

  1. First, import the OpenCV library (cv2).
  2. Start the webcam using the VideoCapture() function of cv2.
  3. Load the Haar cascade XML files into the Python file with CascadeClassifier().
  4. A video is nothing but a series of images, so we run an infinite while loop to process frames continuously.
  5. Read each frame from the video stream with read().
  6. Since feature detection is more accurate on grayscale images, convert each frame to grayscale using cvtColor() with the COLOR_BGR2GRAY flag, both basic OpenCV functions.
  7. Detect faces with the loaded Haar cascade and the detectMultiScale() function, passing the grayscale image, scaleFactor, and minNeighbors (see the snippet after this list).
  • scaleFactor: specifies how much the image is scaled down at each detection pass. Accuracy depends on it, so keep it close to 1 but not too close: a value like 1.001 (very close to 1) would detect even shadows, so 1.1 is good enough for faces.
  • minNeighbors: specifies how many neighboring detections each candidate rectangle needs in order to be retained.
  8. If a face is detected, draw its outer boundary with the rectangle() method of cv2, which takes 5 arguments: the image, the top-left corner (x, y), the opposite end of the diagonal (x + width, y + height), the color of the rectangle, and, last, the thickness of the drawn border.
  9. If a face is detected, similarly detect a smile; if a smile is detected too, print "Image <cnt> saved" in the cmd/terminal and supply the path of the folder in which the images should be saved.
  10. To save the images, use imwrite(), which takes 2 parameters: the path and the image.
  11. To avoid saving images endlessly, only 2 images are saved in one run; an if statement breaks the loop once both have been written.
  12. To break the infinite loop, use an if statement that becomes true when 'q' (for 'quit') is pressed.
  13. Finally, release the video capture.
  14. Do not forget to destroy all the windows.
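
For reference, this is how detectMultiScale() looks with the two parameters written out as keywords; it is a minimal, standalone illustration rather than part of the project code, and "sample_face.jpg" is a placeholder for any photo containing a face.

import cv2

# Load the bundled frontal-face cascade (see the note above about cv2.data.haarcascades)
faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("sample_face.jpg")                    # placeholder image path
if img is not None:
    grayImg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # cascades expect grayscale
    # scaleFactor=1.1 shrinks the image by 10% at each detection pass;
    # minNeighbors=4 keeps a candidate only if 4 overlapping detections confirm it.
    faces = faceCascade.detectMultiScale(grayImg, scaleFactor=1.1, minNeighbors=4)
    print(len(faces), "face(s) found")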

Code:

import cv2

# Start the webcam (device 0)
video = cv2.VideoCapture(0)

# Load the Haar cascade XML files (adjust the paths to wherever you saved them)
faceCascade = cv2.CascadeClassifier("G:/dataset/haarcascade_frontalface_default.xml")
smileCascade = cv2.CascadeClassifier("G:/dataset/haarcascade_smile.xml")

cnt = 1  # counts how many selfies have been saved

while True:
    success, img = video.read()
    if not success:
        break

    # Cascades work best on grayscale images
    grayImg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect faces: scaleFactor = 1.1, minNeighbors = 4
    faces = faceCascade.detectMultiScale(grayImg, 1.1, 4)
    for x, y, w, h in faces:
        # Draw the face boundary
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), 3)

        # Detect smiles: scaleFactor = 1.8, minNeighbors = 15
        smiles = smileCascade.detectMultiScale(grayImg, 1.8, 15)
        for sx, sy, sw, sh in smiles:
            img = cv2.rectangle(img, (sx, sy), (sx + sw, sy + sh), (100, 100, 100), 2)
            print("Image " + str(cnt) + " saved")
            path = r'G:\dataset\img' + str(cnt) + '.jpg'
            cv2.imwrite(path, img)
            cnt += 1
            break  # save at most one selfie per frame

    cv2.imshow('live video', img)

    keyPressed = cv2.waitKey(1)
    # Quit on 'q', or stop automatically once 2 selfies have been saved
    if (keyPressed & 0xFF) == ord('q') or cnt > 2:
        break

video.release()
cv2.destroyAllWindows()
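
The code above searches the entire frame for a smile, which can occasionally trigger on patterns outside the face. A common refinement, not part of the original article, is to run the smile cascade only on the detected face region; the short sketch below shows the idea on a single frame, again assuming the bundled cascade files.

import cv2

faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smileCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

video = cv2.VideoCapture(0)
success, img = video.read()
if success:
    grayImg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for x, y, w, h in faceCascade.detectMultiScale(grayImg, 1.1, 4):
        faceROI = grayImg[y:y + h, x:x + w]            # crop the face region
        smiles = smileCascade.detectMultiScale(faceROI, 1.8, 15)
        if len(smiles) > 0:
            print("Smile detected inside the face region")
video.release()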
