House Safety using Computer Vision by OpenCV and NumPy: PART-2

Yukti Deshlan
Nur: The She Code Africa Blog
2 min read · Apr 16, 2021
Photo by Thought Catalog on Unsplash

To understand this article better, I would advise that you read Part-1 if you have not already done that.

Training the model:

Step-5: Import the modules required for the second part of the project.

# importing the required modules
import cv2
import numpy as np
from os import listdir
from os.path import isfile, join
from playsound import playsound

Step-6: Point the script at the dataset built in Part-1, load each image in grayscale, and train the recognizer on it.

# Referring to the path of the dataset
data_path = '**path file of the dataset**'
onlyfiles = [f for f in listdir(data_path) if isfile(join(data_path, f))]

Training_Data, Labels = [], []
for i, files in enumerate(onlyfiles):
    image_path = join(data_path, onlyfiles[i])
    images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    Training_Data.append(np.asarray(images, dtype=np.uint8))
    Labels.append(i)

Labels = np.asarray(Labels, dtype=np.int32)

# the LBPH face recognizer ships with the opencv-contrib-python package
model = cv2.face.LBPHFaceRecognizer_create()
model.train(np.asarray(Training_Data), np.asarray(Labels))
print('model training complete!!')
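The file-listing and labelling pattern used above can be exercised without a real dataset. This is a minimal, hypothetical sketch against a throwaway directory (the `user.N.jpg` filenames are just stand-ins for whatever the Part-1 collector produced):

```python
import tempfile
from os import listdir
from os.path import isfile, join

# Hypothetical demo of the (file, label) pairing the training step builds:
# each image in the dataset folder gets its own integer label.
with tempfile.TemporaryDirectory() as data_path:
    # create three empty stand-in "image" files
    for i in range(3):
        open(join(data_path, f"user.{i}.jpg"), "w").close()

    onlyfiles = [f for f in listdir(data_path) if isfile(join(data_path, f))]
    labels = list(range(len(onlyfiles)))  # one label per image
    print(len(onlyfiles), labels)  # → 3 [0, 1, 2]
```

Note that because every image receives a distinct label here, the recognizer is effectively trained to recognize one person (you) across many samples.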

Step-7: Use a Haar cascade to detect the face in each frame, then crop and resize the region of interest to the dimensions below so the recognizer can read the facial features.

# load the frontal-face Haar cascade bundled with OpenCV
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def face_detector(img, size=0):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return img, []
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 255), 2)
        roi = img[y:y + h, x:x + w]
        roi = cv2.resize(roi, (200, 200))
    return img, roi

cap = cv2.VideoCapture(0)
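The crop inside `face_detector` is plain NumPy slicing: `img[y:y+h, x:x+w]` cuts the detected rectangle out of the frame before it is resized to 200×200. A minimal sketch of that slicing, using a fake 10×10 "image" and a hypothetical detection rectangle instead of a real webcam frame:

```python
import numpy as np

# stand-in 10x10 grayscale "frame"
frame = np.arange(100, dtype=np.uint8).reshape(10, 10)

# hypothetical detection rectangle (x, y, width, height)
x, y, w, h = 2, 3, 4, 5

# same slice face_detector uses: rows are y..y+h, columns are x..x+w
roi = frame[y:y + h, x:x + w]
print(roi.shape)  # → (5, 4): height first, then width
```

Note the row/column order: NumPy indexes rows (the y axis) first, which is why the slice is `[y:y+h, x:x+w]` rather than `[x:x+w, y:y+h]`.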

Step-8: Finally, run the recognizer against the live feed and match each frame with your dataset of face images. Download an alerting 'beep' sound and point playsound at it, so it plays whenever an unknown face appears.

while True:
    ret, frame = cap.read()
    image, face = face_detector(frame)
    try:
        face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
        result = model.predict(face)
        if result[1] < 500:
            confidence = int(100 * (1 - (result[1]) / 300))
            display_string = str(confidence) + '% confidence it is user'
        cv2.putText(image, display_string, (100, 120),
                    cv2.FONT_HERSHEY_COMPLEX, 1, (250, 120, 255), 2)
        if confidence > 75:
            cv2.putText(image, "the user", (250, 450),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("Face Cropper", image)
        else:
            cv2.putText(image, "unknown", (250, 450),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
            cv2.imshow("Face Cropper", image)
            playsound('**Add the path to your audio file**')
    except:
        cv2.putText(image, "Face not found", (250, 450),
                    cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0), 2)
        cv2.imshow("Face Cropper", image)
        pass
    if cv2.waitKey(1) == 13:  # 13 is the Enter key
        break

cap.release()
cv2.destroyAllWindows()
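A note on the confidence score in the loop above: LBPH's `predict()` returns a `(label, distance)` pair, where a *lower* distance means a *closer* match. The loop converts that distance into a percentage with `int(100 * (1 - distance / 300))`, then treats anything above 75% as the user. A minimal sketch of that mapping on its own:

```python
# Same mapping the loop applies to the LBPH distance (result[1]):
# distance 0 -> 100% confidence, distance 300 -> 0% confidence.
def to_confidence(distance):
    return int(100 * (1 - distance / 300))

print(to_confidence(30))   # → 90  (close match, well above the 75% cutoff)
print(to_confidence(150))  # → 50  (treated as unknown)
print(to_confidence(285))  # → 5   (very poor match)
```

The constants 300 (the distance scale) and 75 (the acceptance threshold) are the article's tuning choices; you may need to adjust them for your own dataset and lighting conditions.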

Output: Now, when you run the code, it will start scanning your face and tell you whether you are 'the user', or alert you with a sound if it detects a stranger. The implementation should look something like this:

the final output of the project

Further developments you can implement:

• This is just the software part of the project. We would like to extend it into a full-fledged system built around an Arduino, a Raspberry Pi, and other hardware, though the hardware side would require some resources.

• We could build an interface, such as a web application or an Android application, to make the system accessible to everyone at low cost and to keep a record/history of detections.

• Further additions, such as merging in datasets of people with criminal records obtained from legal authorities, would make the system even more useful.

• Thanks to the simplicity of the project, the data can easily be updated from time to time to reflect the number of people coming and going.


Refer to the code on my GitHub account and connect with me on my LinkedIn, and if you like reading my articles, do share them with your friends and family!

Thank you!

