Simple YOLOv5 Part 2 : Train Custom YOLOv5 Model

Chanon Krittapholchai
7 min read · Dec 30, 2021


This article is about how to train a custom model with YOLOv5.

Ultralytics’s YOLOv5 Logo from their Github repository.

This article is the second of a two-part series about “Simple YOLOv5” ;

  1. Deploy YOLOv5 on Windows
  2. Train Custom YOLOv5 Model

“Simple YOLOv5” refers to a trial object-detection system that I deployed for a friend who wanted to improve a process in his family’s business without much investment or coding.

And I’m going to use Mr. Pompompurin as the target for this article instead of the real target from my case.

Actually, he is not a bear….

YOLOv5 already has an official tutorial for training a custom model in the link below ;
*Which is really good and easy ❤

But in my case, I had to set things up so that even the operations team could train custom models by themselves, and this article is about how I trained them to do so.

I’ll put links to every source at the end of this article.

This article is divided into 6 parts ;

  1. Prepare Dataset for your custom model
  2. Create folder structure
  3. Split pictures : train/val/test
  4. Label your data
  5. Training your custom model on Google’s Colab
  6. Deploy your custom model

1. Prepare Dataset for your custom model

This step is simple in concept but can be very hard in practice.
All you need is a lot of pictures of the thing(s) that you want to detect.

For this case, I just use pictures found on Google.

A lot to use :3

And also some screenshots from this video for testing.

And in some cases, you might want to take video(s) of your targets and split them into pictures with OpenCV.

import cv2

video = cv2.VideoCapture('myvideo.mp4')
count_frame = 0
while video.isOpened():
    ret, frame = video.read()
    if not ret:
        print('End of Video')
        break
    # Zero-pad the frame number so files sort correctly, e.g. 00042.jpg
    image_name = '{}.jpg'.format(('0000' + str(count_frame))[-5:])
    cv2.imwrite(image_name, frame)
    count_frame += 1
video.release()

2. Create folder structure

This was the most confusing part in my case.

Create a project folder with sub-folders inside, like this ;

project_folder
- data
  |- images
  |  |- train
  |  |  |- {your training image files}
  |  |- val
  |  |  |- {your validation image files}
  |  |- test (optional)
  |     |- {your testing image files}
  |- labels
     |- train
     |  |- {your training label files}
     |- val
     |  |- {your validation label files}
     |- test (optional)
        |- {your testing label files}
My initial project folder

And in that project folder, create 2 text files ;

  1. data.yaml
  2. labels.txt

Then put the following content in those text files ;

For “data.yaml” ;

train: /content/datasets/data/images/train
val: /content/datasets/data/images/val
test: /content/datasets/data/images/test

nc: 2
names: ['label1', 'label2']
  • nc : the number of classes that you want to detect
  • names : the list of classes that you want to detect

For “labels.txt” ;

label1
label2
  • It is a list of the classes that you want to detect, one per line, the same as “data.yaml”’s names

Finally, your folders should look like this ;

I will create a model that detects 3 classes
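If you’d rather not create all of this by hand, a short script can build the same tree and write both text files. This is just a sketch that reproduces the two-label example above; swap in your own labels and label count.

```python
from pathlib import Path

# Build the folder tree from step 2 under a project folder.
root = Path('project_folder')
for split in ['train', 'val', 'test']:
    (root / 'data' / 'images' / split).mkdir(parents=True, exist_ok=True)
    (root / 'data' / 'labels' / split).mkdir(parents=True, exist_ok=True)

# Write "data.yaml" with the same content as shown above.
(root / 'data.yaml').write_text(
    'train: /content/datasets/data/images/train\n'
    'val: /content/datasets/data/images/val\n'
    'test: /content/datasets/data/images/test\n'
    '\n'
    'nc: 2\n'
    "names: ['label1', 'label2']\n"
)

# Write "labels.txt", one class name per line.
(root / 'labels.txt').write_text('label1\nlabel2\n')
```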

3. Split pictures : train/val/test

For this step, we have to split the pictures from step 1 into these 3 folders ;

  1. data/images/train : pictures that YOLO uses to build the model
  2. data/images/val : pictures that YOLO uses to validate during training
  3. data/images/test : pictures that “YOU” use to check the final result

For my case, in this step, I had my operations team split the pictures manually, but this may depend on your case or your pictures.

My go-to train/val/test ratios are 70/20/10 or 60/20/20.

My split dataset

You may need more than 32 pictures to get a good model.
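If you prefer to split automatically instead of manually, a small script can do the random split. This is a sketch, assuming your pictures are .jpg files collected in one source folder; the function name and the default 70/20/10 ratio are my own choices.

```python
import random
import shutil
from pathlib import Path

def split_dataset(source_dir, dest_dir, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly copy .jpg files from source_dir into
    dest_dir/images/{train,val,test} using the given ratios."""
    images = sorted(Path(source_dir).glob('*.jpg'))
    random.Random(seed).shuffle(images)  # fixed seed for a repeatable split
    n = len(images)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    splits = {
        'train': images[:n_train],
        'val': images[n_train:n_train + n_val],
        'test': images[n_train + n_val:],  # the remainder goes to test
    }
    for split, files in splits.items():
        out = Path(dest_dir) / 'images' / split
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
    return {k: len(v) for k, v in splits.items()}
```

For example, `split_dataset('my_pictures', 'project_folder/data')` copies 70% of the pictures into train, 20% into val, and the rest into test.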

4. Label your data

This is the longest step for my case.

There are many tools that you can use to label your data, but I’ll suggest the 2 tools that I had my operations team use ;

4.A) LabelImg

LabelImg is an open-source annotation tool. You can visit LabelImg’s home page from the link below ;

For Windows users, I’d recommend opening the link below and downloading “windows_vXXXX.zip”

Unzip that file, open the text file “predefined_classes.txt”, and replace its content with the content of your “labels.txt” file from step 2.

Copy labels from step 2 to file in LabelImg’s folder

Then, open “labelImg.exe” to begin annotating.

Don’t forget to click on “Pascal/VOC” so it changes into “YOLO”.

Click to change into YOLO format

I recommend using “Open Dir” to open the images folder and “Change Save Dir” to save label files into the labels folder.

This picture is from the YouTube video in step 1
Label files and image files

4.B) makesense.ai

This tool is very easy to use; just open the link below ;

Upload your image files and use the labels from the “labels.txt” file from step 2.
Then annotate and export the annotation file in YOLO format.
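Whichever tool you use, the exported YOLO label file is a plain .txt file (one per image) with one line per bounding box: a class index followed by the box’s center x, center y, width, and height, all normalized to the 0–1 range. As a sketch, a minimal parser for one such line (the function name is my own):

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO-format label file.

    Format: '<class_id> <x_center> <y_center> <width> <height>',
    with all coordinates normalized to the range 0..1.
    """
    parts = line.split()
    class_id = int(parts[0])
    x_center, y_center, width, height = map(float, parts[1:])
    return class_id, x_center, y_center, width, height

# Example: class 0, box centered at (0.5, 0.5), 20% wide and 40% tall.
print(parse_yolo_label('0 0.5 0.5 0.2 0.4'))
```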

After you finish labeling, zip your project folder and name it “datasets.zip”

Zip and rename as “datasets.zip”

5. Training your custom model on Google’s Colab

Open this link >> HERE

For this step, I copied the notebook from YOLOv5’s page and simplified it.

My simplified notebook

5.A) Press “File” and “Save a copy in Drive” to copy the notebook to your drive

Copy to your drive

5.B) Click the folder icon on the left and upload your “datasets.zip”

Upload “datasets.zip” to Google’s Colab

5.C) Look for “Run Cell Below” and run it; a description of each cell is already in the notebook

Look for “Run Cell Below” and run every cell
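For reference, the core cells of such a notebook boil down to roughly the commands below. This is a sketch based on the official YOLOv5 tutorial, not my notebook’s exact content; the image size, batch size, and epoch count are example values, and the dataset paths match the “data.yaml” from step 2.

```shell
# Get YOLOv5 and install its dependencies.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

# Unzip the dataset uploaded in step 5.B into /content/datasets.
unzip -q /content/datasets.zip -d /content/datasets

# Train, starting from the pretrained yolov5s checkpoint.
python train.py --img 640 --batch 16 --epochs 100 \
    --data /content/datasets/data.yaml --weights yolov5s.pt
```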

There is an evaluation cell where we can see how good our model is.
In my case, it didn’t perform well, but that’s enough for a demo.

The custom model can detect Pompom but can’t detect Cinnamon, from the YouTube video in step 1

You can improve your model’s performance as described on YOLOv5’s official page below ;

5.D) After running the last cell in that notebook, you can download your model.

Run last cell to download your model

6. Deploy your custom model

After you get your custom model from step 5, just put that model in the same folder as your run script from part 1 of this article. Then make some adjustments as in the picture below ;

Only 3 parts need to be modified

Or just like this ;
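If your run script from part 1 loads the model through PyTorch Hub, switching to the custom model is typically a one-line change. A sketch, assuming your downloaded weights file is named 'best.pt' (the name may differ in your case):

```python
import torch

# Load the custom weights instead of a pretrained model; 'best.pt' is
# the file downloaded from Colab in step 5.D (requires internet access
# the first time, to fetch the yolov5 hub code).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

results = model('test_image.jpg')  # run inference on one picture
results.print()                    # print the detections
```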

Test result of your custom model ;

Yup, there they are…

Hopefully, this article will be useful for you.
