Real Time Mask Detection Using YOLOv5

Simple project for real-time object detection

Ahmad Arbain
6 min readJun 23, 2022

Hello! I’m Ahmad Arbain, and this is my first article on Medium; I hope it will be helpful for everyone who reads it. Speaking of Covid-19, we are currently living alongside the virus. Even though the government has lifted the face mask regulation in outdoor spaces, we are still required to wear masks indoors and in crowded areas.

In this article, I will explain a project I worked on during my internship: real-time face mask detection using a deep learning algorithm called YOLO together with OpenCV. YOLO, which stands for You Only Look Once, is a deep learning algorithm that uses a convolutional neural network (CNN) to detect objects.

“YOLOv5 is the latest product in YOLO series. YOLOv5 is improved on the basis of YOLOv4, and its running speed is greatly improved, with the fastest speed reaching 140 frames per second. Meanwhile, the size of YOLOv5 is small, and the weight file is nearly 90% smaller than that of YOLOv4, which enables YOLOv5 to be deployed to embedded devices. Compared with YOLOv4, YOLOv5 has a higher accuracy rate and better ability to recognize small objects.”

Source : https://jurnal.polibatam.ac.id/index.php/JAIC/article/view/3484/1616

Firstly, to build the machine learning model, we need a dataset to serve as training data for the face mask detector. For this dataset, we need images with mask and non-mask labels, which we can collect from Google Images by web-scraping with a script from this GitHub repository. There are two ways to get the script: download the ZIP or clone the repository.

Repository Github web-scraping images.
#Clone Repository Scrape Image
!git clone https://github.com/irzaip/scrapegimg.git

After cloning the repository, next is to follow the steps listed in the README.md file.

#Steps to do web-scraping
1. Open Google Chrome
2. Go to Google Images
3. Search for images based on your keywords
4. Scroll down to load more images
5. Open the console (developer tools)
6. Select all + copy the code in capurl.js and paste it into the console
7. Pressing ENTER will create a url.txt file that is downloaded to your computer
8. Download the images using: python3 ./scrape.py -u url.txt -o outputdir
9. Arrange and organize your pictures
NOTE: In step 8, make sure the output folder for the downloads already exists.

The next step after successfully collecting images for the dataset is labelling those images, in order to create training data for the YOLOv5 model. To do the labelling, you can use a tool called makesense.ai. To use this tool, you need to prepare two folders, an images folder and a labels folder, each consisting of train and val subfolders.

Dataset folder images and labels

Note: The train data is 90% and the val data is 10% of the total number of images.

train and val folders on each of the image and label folders

Training and validation images are stored separately in two folders.
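The 90/10 split can be sketched in a few lines of Python; the file names, dataset size, and fixed seed below are illustrative, not part of the original tutorial:

```python
import random

def split_dataset(filenames, train_ratio=0.9, seed=42):
    """Shuffle file names and split them into train/val lists (90/10 by default)."""
    files = sorted(filenames)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    cut = int(len(files) * train_ratio)
    return files[:cut], files[cut:]

# Example: 200 collected images -> 180 train, 20 val
images = [f"mask_{i:03d}.jpg" for i in range(200)]
train, val = split_dataset(images)
print(len(train), len(val))  # 180 20
```

The returned lists can then be used to copy files into images/train and images/val before labelling.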
Next, to carry out the labelling process, you can follow these steps by accessing the makesense.ai page.

Face Mask Dataset Labeling

After labelling, prepare the labelled dataset for training YOLOv5.

Face Mask Dataset Labeling

Perform labeling on the train and val image datasets then save the results to the train and val labels folder.
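makesense.ai can export the annotations in YOLO format: each image gets a .txt file of the same name, with one line per bounding box in the form class_id x_center y_center width height, where all coordinates are normalized to [0, 1]. For example, a hypothetical image containing one masked and one unmasked face might produce a label file like this (the numbers are illustrative):

0 0.512 0.430 0.210 0.305
1 0.120 0.655 0.180 0.240

Here class 0 is with_mask and class 1 is without_mask, matching the order of the class names used later for training.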

The next step after labelling the dataset is building the face mask model. In this tutorial I will use YOLOv5; you can access the YOLOv5 GitHub repository here.

Step-1
First, to use YOLOv5, open the GitHub page, then select and click Google Colab.

Yolov5 environment for google colab

Step-2
Zip the dataset then upload it to Google Colab:

Uploaded dataset

Step-3
Before starting training YOLOv5 on Google Colab, prepare a custom.yaml file. It acts as a configuration file pointing to the training dataset uploaded to Google Colab.

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Custom face mask dataset
# Example usage: python train.py --data custom.yaml

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
train: /content/images/train # train images
val: /content/images/val # val images
test: # test images (optional)

# Classes
nc: 2 # number of classes
names: ['with_mask', 'without_mask'] # class names

Step-4
After creating the custom.yaml configuration file, upload it to Google Colab in the yolov5 -> data folder, as follows:

Step-5
Next, set the training parameters for YOLOv5: an image size of 640, a batch size of 32, 20 epochs, and the custom.yaml data file.

The next step is to train the YOLOv5 algorithm on the face mask dataset with the labels with_mask and without_mask. Several parameters need to be configured as follows:

!python train.py --img 640 --batch 32 --epochs 20 --data custom.yaml --weights yolov5s.pt --cache
1. train.py is the file that contains the commands to perform the training process using the YOLOv5 algorithm.

2. img 640 is the pixel size of the images to be trained on.

3. batch 32 and epochs 20 are hyperparameters used in the training process: the batch size is the number of samples processed before the model is updated, while the number of epochs determines how many times the learning algorithm works through the entire training dataset.

4. data custom.yaml is the file that specifies where the dataset is located.

5. weights yolov5s.pt is the YOLOv5 model variant used; for more details you can access it here.
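To make the batch and epoch settings concrete, here is a quick back-of-the-envelope calculation; the train split size of 180 images is only an assumption for illustration:

```python
import math

train_images = 180  # assumed size of the train split (illustration only)
batch_size = 32     # --batch 32
epochs = 20         # --epochs 20

steps_per_epoch = math.ceil(train_images / batch_size)  # batches needed to see every image once
total_updates = steps_per_epoch * epochs                # weight updates over the whole run
print(steps_per_epoch, total_updates)  # 6 120
```

So with these settings, each epoch performs 6 weight updates and the full run performs 120.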

Then execute the cell on Google Colab to get the following results:

Step-6
Look at the training results in the folder yolov5 -> runs -> train -> exp; the results include images labelled with with_mask and without_mask bounding boxes. For more details, see the following video tutorial.

The next step is to perform face mask detection in real time. In this tutorial, I used a library called OpenCV to put the machine learning model to work for object detection. OpenCV enables live streaming for object detection by capturing video frame by frame and passing each frame through the integrated machine learning model.

For the real-time implementation, the first step is to prepare the model previously trained with the YOLO algorithm. Next, clone the yolov5 GitHub repository on your local computer, go into the yolov5 folder, and install the requirements so the system can run.

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt # install

Next, implement the prepared model results into real-time face mask detection objects using OpenCV, with the following steps:

1. Import the torch, numpy, and cv2 libraries
2. Load the model using PyTorch
3. Open the webcam and run the video stream through face mask object detection.
OpenCV code for face mask detection
Here’s the result:
The result of the detection of face-mask

Reference:

  1. https://github.com/ultralytics/yolov5
  2. https://youtu.be/GRtgLlwxpc4
  3. https://jurnal.polibatam.ac.id/index.php/JAIC/article/view/3484/1616
