TRAIN A CUSTOM YOLOv4-tiny OBJECT DETECTOR USING GOOGLE COLAB

Tutorial for beginners

Techzizou
Published in Analytics Vidhya · Feb 24, 2021 · 14 min read

In this tutorial, we will be training our custom detector for mask detection using YOLOv4-tiny and Darknet. YOLOv4-tiny is preferable for real-time object detection because of its faster inference time.

My YouTube video on this!

HOW TO BEGIN?

  • ✅Subscribe to my YouTube channel 👉🏻 https://bit.ly/3Ap3sdi 😁😜
  • Open my Colab notebook on your browser.
  • Click on File in the menu bar and click on Save a copy in drive. This will open a copy of my Colab notebook on your browser which you can now use.
  • Next, once you have opened the copy of my notebook and are connected to the Google Colab VM, click on Runtime in the menu bar and click on Change runtime type. Select GPU and click on save.

FOLLOW THESE 12 STEPS TO TRAIN AN OBJECT DETECTOR USING YOLOv4-tiny

( NOTE: Except for the custom config file and the pre-trained weights file, all the steps are the same as in the previous custom YOLOv4 training tutorial (https://medium.com/analytics-vidhya/train-a-custom-yolov4-object-detector-using-google-colab-61a659d4868). There is one other change in this YOLOv4-tiny tutorial: here we clone the Darknet git repository onto the Colab cloud VM itself, whereas in the previous YOLOv4 tutorial we cloned the repository into a folder on our Google Drive.)

Google Colab instances have much faster local storage than Google Drive. Reading files from Google Drive incurs a higher access time, which slows everything down. So here we copy the files onto the Colab instance first and then train our detector model, which makes the process faster.

  1. Clone the Darknet git repository onto the Colab VM
  2. Create yolov4-tiny and training folders in your google drive
  3. Create & upload the files we need for training ( i.e. “obj.zip” , “yolov4-tiny-custom.cfg”, “obj.data”, “obj.names” and “process.py” ) to your drive
  4. Mount drive and link your folder
  5. Make changes in the Makefile to enable OPENCV and GPU
  6. Run make command to build darknet
  7. Copy the files “obj.zip”, “yolov4-tiny-custom.cfg”, “obj.data”, “obj.names”, and “process.py” from the yolov4-tiny folder to the darknet directory in Colab VM
  8. Run the process.py python script to create the train.txt & test.txt files
  9. Download the pre-trained YOLOv4-tiny weights
  10. Train the detector
  11. Check performance
  12. Test your custom Object Detector

LET'S BEGIN !!!

Original Video by cottonbro from Pexels

1) Clone Darknet git repository

Clone the Darknet git repository on the Colab VM

!git clone https://github.com/AlexeyAB/darknet
Cloned Darknet git repo on Colab VM

2) Create ‘yolov4-tiny’ and ‘training’ folders in your drive

Create a folder named yolov4-tiny in your drive. Next, create another folder named training inside the yolov4-tiny folder. This is where we will save our trained weights (This path is mentioned in the obj.data file which we will upload later)

3) Create & upload the following files which we need for training a custom detector

a. Labeled Custom Dataset
b. Custom cfg file
c. obj.data and obj.names files
d. process.py file (to create train.txt and test.txt files for training)

I have uploaded my custom files for mask detection on my GitHub. I am working with 2 classes i.e. “with_mask” and “without_mask”.

Labeling your Dataset

Input image (Image1.jpg)

Original Photo by Ali Pazani from Pexels

You can use any software for labeling like the labelImg tool.

labelImg GUI for Image1.jpg

I use an open-source labeling tool called OpenLabeling with a very simple UI.

OpenLabeling Tool GUI

Click on the link below to learn more about the labeling process and other software for it:

NOTE : Garbage In = Garbage Out. Choosing and labeling images is the most important part. Try to find good quality images. The quality of the data goes a long way towards determining the quality of the result.

The output YOLO format labeled file looks as shown below.

Image1.txt
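For reference, each line of a YOLO label file has the form class_id x_center y_center width height, where the four box values are normalized by the image dimensions. A minimal sketch of converting a pixel-coordinate box to this format (the function name and the sample numbers are mine, purely for illustration):

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-coordinate bounding box to a YOLO label line."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x200 px box in a 400x400 image, class 0 ("with_mask")
print(to_yolo_line(0, 50, 100, 150, 300, 400, 400))
# → 0 0.250000 0.500000 0.250000 0.500000
```

Labeling tools such as labelImg and OpenLabeling write these lines for you; the sketch is just to show what the numbers mean.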

3(a) Create and upload the labeled custom dataset “obj.zip” file to the “yolov4-tiny” folder on your drive

Put all the input image “.jpg” files and their corresponding YOLO format labeled “.txt” files in a folder named obj.

Create its zip file obj.zip and upload it to the yolov4-tiny folder on your drive.
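If you prefer to build the archive in code rather than with your OS tools, here is a sketch using Python's zipfile module (the helper name is mine; it assumes you run it from the directory that contains the obj folder):

```python
import zipfile
from pathlib import Path

def zip_obj_folder(folder="obj", out="obj.zip"):
    """Zip every .jpg image and .txt label in the obj folder, keeping the obj/ prefix."""
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path(folder).iterdir()):
            if f.suffix in (".jpg", ".txt"):
                zf.write(f, arcname=f"{folder}/{f.name}")
```

Call zip_obj_folder() from the folder that contains obj/, then upload the resulting obj.zip to your drive.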

obj folder containing both the input image files and the YOLO labeled text files

3(b) Create your custom config file and upload it to the ‘yolov4-tiny’ folder on your drive

Download the yolov4-tiny-custom.cfg file from darknet/cfg directory, make changes to it, and upload it to the yolov4-tiny folder on your drive.

You can also download the custom config files from the official AlexeyAB Github.

Make the following changes in the custom config file:

  • change line batch to batch=64
  • change line subdivisions to subdivisions=16
  • set network size width=416 height=416 or any value multiple of 32
  • change line max_batches to classes*2000 (but not less than the number of training images and not less than 6000), e.g. max_batches=6000 if you train for 3 classes
  • change line steps to 80% and 90% of max_batches, e.g. steps=4800,5400
  • change filters=255 to filters=(classes + 5)*3 in each of the two [convolutional] layers that come directly before a [yolo] layer; keep in mind that only the last [convolutional] before each [yolo] layer needs this change
  • change line classes=80 to your number of objects in each of 2 [yolo]-layers

So if classes=1 then it should be filters=18. If classes=2 then write filters=21.

You can tweak other parameter values too like the learning rate, angle, saturation, exposure, and hue once you’ve understood how the basic training process works. For beginners, the above changes will suffice.
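The arithmetic above is easy to get wrong, so here is a small sketch that computes the values to put in the cfg for a given class count (the helper name is mine, not part of Darknet):

```python
def cfg_values(classes, num_train_images=0):
    """Compute max_batches, steps and filters for a custom YOLOv4-tiny cfg."""
    # max_batches: classes*2000, but never below the image count or 6000
    max_batches = max(classes * 2000, num_train_images, 6000)
    # steps: 80% and 90% of max_batches
    steps = (int(max_batches * 0.8), int(max_batches * 0.9))
    # filters in the [convolutional] layers before each [yolo] layer
    filters = (classes + 5) * 3
    return {"max_batches": max_batches, "steps": steps, "filters": filters}

print(cfg_values(2))  # mask detector: "with_mask" / "without_mask"
# → {'max_batches': 6000, 'steps': (4800, 5400), 'filters': 21}
```

For the two-class mask detector this gives exactly the values used in this tutorial.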

NOTE: What are subdivisions?

  • It is the number of mini-batches we split our batch into.
  • Batch=64 -> loading 64 images for one iteration.
  • Subdivision=8 -> Split batch into 8 mini-batches so 64/8 = 8 images per mini-batch and these 8 images are sent for processing. This process will be performed 8 times until the batch is completed and a new iteration will start with 64 new images.
  • If you are using a GPU with low memory, set a higher value for subdivisions (32 or 64). Training will take longer, since fewer images are loaded per mini-batch and more mini-batches have to be processed per iteration.
  • If you have a GPU with high memory, set a lower value for subdivisions (16 or 8). This will speed up the training process as this loads more images per iteration.
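Put another way, the mini-batch size that must fit into GPU memory is batch divided by subdivisions. A quick check of the trade-off (a sketch, not Darknet code):

```python
batch = 64
# images per mini-batch for each subdivisions setting
mini_batches = {s: batch // s for s in (8, 16, 32, 64)}
for s, m in mini_batches.items():
    print(f"subdivisions={s}: {m} images per mini-batch")
```

Lower subdivisions means more images per mini-batch, which needs more GPU memory but trains faster.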

3(c) Create your “obj.data” and “obj.names” files and upload them to your drive

obj.data

The obj.data file has :

  • The number of classes.
  • The path to train.txt and test.txt files that we will create later.
  • The path to obj.names file which contains the names of the classes.
  • The path to the training folder where the training weights will be saved.
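Putting those four items together, an obj.data for the two-class mask detector should look along these lines (the backup path matches the training folder created in step 2; adjust the paths if your layout differs):

```
classes = 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = /mydrive/yolov4-tiny/training
```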

obj.names

Contains the names of the objects, each on a new line. Make sure the classes are in the same order as in the class_list.txt file used while labeling the images, so that the index id of every class matches the one used in the YOLO label txt files.
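For this tutorial's two classes, obj.names is simply:

```
with_mask
without_mask
```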

3(d) Upload the process.py script file to the “yolov4-tiny” folder on your drive

(To split the image files into two parts: 90% for training and 10% for testing)

This process.py script creates the files train.txt & test.txt where the train.txt file has paths to 90% of the images and test.txt has paths to 10% of the images.

You can download the process.py script from my GitHub.

IMPORTANT: The process.py script only looks for the “.jpg” extension, so files in other formats such as “.png”, “.jpeg”, or even “.JPG” (in capitals) won’t be recognized. If you are using any other format, change the process.py script accordingly.

process.py script
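If you would rather write the split yourself than download the script, a minimal sketch of what process.py does (a 90/10 random split of the .jpg files in data/obj, written as Darknet-relative paths; the original script's details may differ):

```python
import glob
import random

def write_splits(image_dir="data/obj", test_fraction=0.1, seed=None):
    """Split the .jpg paths into data/train.txt (90%) and data/test.txt (10%)."""
    images = sorted(glob.glob(f"{image_dir}/*.jpg"))
    if seed is not None:
        random.seed(seed)
    random.shuffle(images)
    n_test = int(len(images) * test_fraction)
    with open("data/test.txt", "w") as f:
        f.write("\n".join(images[:n_test]) + "\n")
    with open("data/train.txt", "w") as f:
        f.write("\n".join(images[n_test:]) + "\n")
```

Run write_splits() from the darknet directory after unzipping obj.zip into data/.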

Now that we have uploaded all the files, our yolov4-tiny folder on our drive should look like this:

4) Mount drive and link your folder

Mount drive

%cd ..
from google.colab import drive
drive.mount('/content/gdrive')

Link your folder

Run the following command to create a symbolic link, so that the path /content/gdrive/My\ Drive/ is now equal to /mydrive

!ln -s /content/gdrive/My\ Drive/ /mydrive

5) Make changes in the makefile to enable OPENCV and GPU

(Also set CUDNN, CUDNN_HALF, and LIBSO to 1)

%cd darknet/
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
!sed -i 's/LIBSO=0/LIBSO=1/' Makefile

6) Run make command to build darknet

!make

7) Copy all the files from the ‘yolov4-tiny’ folder to the ‘darknet’ directory in the Colab VM

The current working directory is /content/darknet

Clean the data and cfg folders, except for the labels folder inside the data folder, which is required for writing label names on the detection boxes.

So remove all other files from the data folder and empty the cfg folder completely, since we already have our custom config file in the yolov4-tiny folder on our drive.

This step is optional.

%cd data/
!find -maxdepth 1 -type f -exec rm -rf {} \;
%cd ..
%rm -rf cfg/
%mkdir cfg

7(a) Copy the obj.zip file from your drive into the darknet directory and unzip it into the data folder in the Colab VM

!cp /mydrive/yolov4-tiny/obj.zip ../
!unzip ../obj.zip -d data/

7(b) Copy your yolov4-tiny-custom.cfg file so that it is now in /darknet/cfg/ folder in the Colab VM

!cp /mydrive/yolov4-tiny/yolov4-tiny-custom.cfg ./cfg

7(c) Copy the obj.names and obj.data files so that they are now in /darknet/data/ folder in the Colab VM

!cp /mydrive/yolov4-tiny/obj.names ./data
!cp /mydrive/yolov4-tiny/obj.data ./data

7(d) Copy the process.py file into the current darknet directory in the Colab VM

!cp /mydrive/yolov4-tiny/process.py ./

8) Run the process.py python script to create the train.txt & test.txt files inside the data folder

!python process.py

List the contents of the data folder to check if the train.txt and test.txt files have been created

!ls data/

The above process.py script creates the two files train.txt and test.txt, where train.txt has paths to 90% of the images and test.txt has paths to 10% of the images. These files look as shown below.

train.txt & test.txt files
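If the screenshot is hard to read: each file is just a list of image paths relative to the darknet directory, one per line, for example (file names are illustrative):

```
data/obj/Image1.jpg
data/obj/Image2.jpg
data/obj/Image3.jpg
```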

IMPORTANT NOTE:

Make sure to download both the “train.txt” and “test.txt” files once they are created, so you can reuse them in case you get disconnected. Since we create these files on the Colab VM, they will be deleted if you lose your session. So when you restart training from the last saved checkpoint as mentioned in step 10, upload these files to the same location where they were created in step 8, i.e. the darknet/data directory. That way you don’t have to recreate them with the “process.py” script every time.

9) Download the pre-trained YOLOv4-tiny weights

Here we use transfer learning. Instead of training a model from scratch, we start from pre-trained weights for the first 29 convolutional layers of YOLOv4-tiny (yolov4-tiny.conv.29). Run the following command to download this pre-trained weights file.

!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29

10) Training

Train your custom detector

For best results, stop training when the average loss is less than 0.05 if possible, or at least consistently below 0.3; otherwise, train until the average loss stops showing any significant change.

!./darknet detector train data/obj.data cfg/yolov4-tiny-custom.cfg yolov4-tiny.conv.29 -dont_show -map

The -map flag here computes the Mean Average Precision during training. The higher the mAP, the better the detector.

You can visit the official AlexeyAB Github page which gives a detailed explanation of when to stop training. Click on the link below to jump to that section.

To restart your training (In case the training does not finish and you get disconnected)

If you get disconnected or lose your session, you don’t have to start training your model from scratch again. You can restart training from where you left off. Use the weights that were saved last. The weights are saved every 100 iterations as yolov4-tiny-custom_last.weights in the yolov4-tiny/training folder on your drive. (The path we gave as backup in “obj.data” file).

To restart training from the last saved checkpoint, run steps 1, 4, 5, 6, and 7. For step 8, simply upload the same “train.txt” and “test.txt” files we downloaded after creating them the first time, since we use those same files for every training run. Then run the following command:

!./darknet detector train data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_last.weights -dont_show -map

Note: Since we copy the files into the darknet directory inside the Colab VM, they are lost whenever you lose your session, so you will have to copy them into the darknet directory again each time using step 7.

11) Check performance

Define helper function imShow

import cv2
import matplotlib.pyplot as plt
%matplotlib inline

def imShow(path):
  image = cv2.imread(path)
  height, width = image.shape[:2]
  resized_image = cv2.resize(image, (3*width, 3*height), interpolation=cv2.INTER_CUBIC)
  fig = plt.gcf()
  fig.set_size_inches(18, 10)
  plt.axis("off")
  plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB))
  plt.show()

Check the training chart

You can check the performance of all the trained weights by looking at the chart.png file. However, the chart.png file only shows results if the training does not get interrupted i.e. if you do not get disconnected or lose your session. If you restart training from a saved point, this will not work.

imShow('chart.png')

If this does not work, there are other methods to check your performance. One of them is by checking the mAP of the trained weights.

Check mAP (mean average precision)

You can check the mAP for all the weights saved every 1000 iterations, e.g. yolov4-tiny-custom_4000.weights, yolov4-tiny-custom_5000.weights, yolov4-tiny-custom_6000.weights, and so on. This way you will know which weights file gives you the best result. The higher the mAP, the better.

Run the following command to check the mAP for a particular saved weights file, where xxxx is its iteration number (e.g. 4000, 5000, 6000, …).

!./darknet detector map data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_xxxx.weights -points 0
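To avoid typing the command for every checkpoint, you can also generate one line per saved weights file and paste them into a cell (a convenience sketch; the iteration numbers assume training ran for 6000 batches):

```python
# Build one "detector map" command per checkpoint saved every 1000 iterations
cmds = [
    "./darknet detector map data/obj.data cfg/yolov4-tiny-custom.cfg "
    f"/mydrive/yolov4-tiny/training/yolov4-tiny-custom_{it}.weights -points 0"
    for it in range(1000, 7000, 1000)
]
print("\n".join(cmds))
```

Compare the reported mAP values and keep the weights file with the highest one.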

12) Test your custom Object Detector

Make changes to your custom config file to set it to test mode

  • change line batch to batch=1
  • change line subdivisions to subdivisions=1

You can do it either manually or by simply running the code below

%cd cfg
!sed -i 's/batch=64/batch=1/' yolov4-tiny-custom.cfg
!sed -i 's/subdivisions=16/subdivisions=1/' yolov4-tiny-custom.cfg
%cd ..

Run detector on an image

Upload an image to your Google Drive to test.

Run your custom detector on an image with this command. (The -thresh flag sets the minimum confidence required for a detection to be shown.)

!./darknet detector test data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_best.weights /mydrive/mask_test_images/image1.jpg -thresh 0.3
imShow('predictions.jpg')
Original Photo by Norma Mortenson from Pexels

Run detector on webcam images

To run the detector on images captured by a webcam, run the following code. This is the camera code snippet provided by Colab, except for the last two lines, which run the detector on the saved image.

Detection on webcam image

Run detector on a video

Upload a video to your Google Drive to test.

Run your custom detector on a video with this command. (The -thresh flag sets the minimum confidence required for a detection to be shown.) This saves the output video with the detections to your output path.

!./darknet detector demo data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_best.weights -dont_show /mydrive/mask_test_videos/test1.mp4 -thresh 0.7 -i 0 -out_filename /mydrive/mask_test_videos/results1.avi
Original Video by Pavel Danilyuk from Pexels

Run detector on a live webcam

First import the dependencies, define the helper functions, and load your custom YOLOv4-tiny files, then run the detector on the webcam.

Run the code below.

Detection on live webcam

NOTE:

The dataset I have collected for mask detection contains mostly close-up images. For more long-shot images you can search online. There are many sites where you can download labeled and unlabeled datasets. I have given a few links at the bottom under Dataset Sources. I have also given a few links for mask datasets. Some of them have more than 10,000 images.

Though we can tweak the training config file or add more images to the dataset for every object class through augmentation, we have to be careful that this does not cause overfitting, which hurts the model's accuracy.

For beginners, you can start simply by using the config file I have uploaded on my GitHub. I have also uploaded my mask images dataset along with the YOLO format labeled text files, which, although it might not be the best, will give you a good start on training your own custom detector model using YOLO. You can find a labeled dataset of better quality later, or find an unlabeled dataset and label it yourself.

Original Video by Max Fischer from Pexels

My GitHub

I have uploaded my custom mask dataset and all the other files needed for training a YOLOv4-tiny detector on my GitHub link below.

My Labeled Dataset (obj.zip)

My Colab notebook for YOLOv4-tiny training

If you found this article helpful, please subscribe and support my channel on YouTube 🖖

My YouTube video on this!

CREDITS

References

Dataset Sources

You can download datasets for many objects from the sites mentioned below. These sites also contain images of many classes of objects along with their annotations/labels in multiple formats such as the YOLO_DARKNET text files and the PASCAL_VOC XML files.

Mask Dataset Sources

I have used these 3 datasets for my labeled dataset:

More Mask Datasets

Video Sources

Don’t forget to leave a 👏

Have a great day !!! ✌

♕ TECHZIZOU ♕

Original Video by Nothing Ahead from Pexels
