YOLOv5 custom data model tutorial with image augmentation

MartynaM
7 min read · Jun 16, 2023


Object detection is now one of the most popular topics in the world. Many countries already use this technology to streamline various aspects of life. With the rising popularity of AI, many people are reading about and exploring the possibilities in this field. The YOLO (You Only Look Once) algorithm has played a significant role in this trend: it is now one of the most popular object detection algorithms, having earned a strong reputation over the years, and it is used in numerous production systems.

YOLOv5, the focus of this article, has recently gained even more popularity than before. This is partly due to the surge of interest in AI, but also because it is much easier to use than previous versions.
However, despite its ease of use, many users still struggle to go beyond the basics, so I will show how to train your own YOLOv5 model on a custom dataset in a quick and easy way.

Part 1 Preparing your own dataset

The first and most important step is to create your own dataset, because without it you obviously cannot move on. Depending on the objects you want to detect, collecting diverse images is essential for good model performance. Even with a small number of images, it is possible to enlarge the dataset artificially. In my case, I want to build a model that detects degus, using personal photos as the dataset.

These are my new residents, so I will use many personal photos in the data collection. The second step is to label the images, or more precisely the objects we want to detect. Data annotation is the process of labeling individual elements of training data to help machines understand exactly what is in them and what is important. Nowadays we can find many different applications and tools for this process; YOLO itself even integrates with Roboflow to automate model creation more powerfully. Here, however, a fairly simple tool will be used: labelImg, which can be downloaded directly from the author: https://github.com/heartexlabs/labelImg

After downloading and opening the tool, you need to select a folder with the photos (or individual photos) to be labeled. Also remember to change the save format for bounding boxes to YOLO; this is important because without it the model cannot be trained.

Labeling data

Once an area is selected and verified, a text file matching the photo is automatically saved. It contains one line per object, with the coordinates of the selected area and the class number:

<object-class-id> <x> <y> <width> <height>
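To make the format concrete, here is a small stdlib-only sketch (the example values and image size are made up) that parses one such label line and converts the normalized values back to pixel coordinates:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Parse one YOLO label line and return (class_id, x, y, w, h) in pixels."""
    parts = line.split()
    class_id = int(parts[0])
    x, y, w, h = (float(v) for v in parts[1:])
    # YOLO stores the box centre and size normalized to the [0, 1] range
    return class_id, x * img_w, y * img_h, w * img_w, h * img_h

# Example: a class-0 box centred in a 640x480 photo, half the image in each dimension
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 480))  # (0, 320.0, 240.0, 320.0, 240.0)
```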

Sometimes the class number is not written the way we would like. Therefore, I created a script that can be used to change the class if necessary; it is available on my GitHub.

Part 2 Data augmentation

As I mentioned at the beginning of the article, it is possible to create more data from the data we already have. The process that allows this is data augmentation, a technique that enlarges the training set by creating modified copies of existing samples. It makes it possible to darken or brighten an image, flip it, rotate it in any direction, blur it or sharpen it, and so on. There are many options that make a file look new to a machine's eyes. The Albumentations library, created especially for this purpose, helps a lot here:
https://albumentations.ai/ On the web you can find extensive information about it, which I recommend reading for a better understanding. When augmenting images together with their bounding boxes, the most important decision is how you want to transform the image. Defining a pipeline is simple: use the library to select transformations and assign the necessary parameters to them.

import albumentations as A

HorizontalFlip_CLAHE = A.Compose([
    A.HorizontalFlip(p=0.6),
    A.CLAHE(p=1),
], bbox_params=A.BboxParams(format='yolo'))

In the pipeline shown above, HorizontalFlip is applied with probability 0.6 and CLAHE with probability 1, and the bounding-box coordinates are recomputed in YOLO format after the transformation. On my GitHub, I created code that uses such pipelines to expand the dataset. By changing the pipeline's structure, you can create different variants of the images and save them many times.
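Albumentations recomputes the boxes for you, but it helps to see what a horizontal flip actually does to YOLO coordinates: in the normalized format only the x-centre changes, mirrored around 0.5. A stdlib-only illustration (not part of the library):

```python
def hflip_yolo_bbox(bbox):
    """Mirror a YOLO-format (x_center, y_center, width, height) box horizontally.

    All values are normalized to [0, 1], so flipping the image maps the
    x-centre to 1 - x; the y-centre and the box size are unchanged.
    """
    x, y, w, h = bbox
    return (1.0 - x, y, w, h)

print(hflip_yolo_bbox((0.25, 0.40, 0.10, 0.20)))  # (0.75, 0.4, 0.1, 0.2)
```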

The same photo before and after applying the augmentation pipeline

Part 3 Splitting the dataset into train/test/validation

Every model to be trained must have its data divided into a training set, a validation set, and possibly a test set.

  • Training set (“train”) — the dataset the algorithm learns from.
  • Validation set (“val”) — the data used to evaluate the model during training.
  • Test set (“test”) — data not used in the training or validation sets, which checks how the model performs, for example at recognizing objects.

In many articles the recommended split is about 70 percent for training and about 30 percent for validation; the test set is often not necessary, so its size can be adjusted to your own needs. The dataset can be divided manually, but it is better to do it programmatically to avoid losing files or splitting the data unevenly. In the code I created, available on GitHub, the files are picked and matched in pairs so that each text file and photo belong together, then shuffled so that the split is fairly random. In my case they were allocated as train = 75%, test = 3%, val = 22%. After that, the folder paths where the files in these proportions are saved are created.
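The full script is on my GitHub; the core idea can be sketched with the standard library alone (file names and ratios below are just the ones used in this article):

```python
import random

def split_pairs(stems, train=0.75, test=0.03, seed=42):
    """Shuffle image/label stems and split them into train/test/val lists.

    `stems` are file names without extension, so "deg001" stands for the
    pair deg001.jpg + deg001.txt, keeping each photo with its label file.
    Whatever is left after the train and test shares becomes validation.
    """
    stems = list(stems)
    random.Random(seed).shuffle(stems)  # fixed seed so the split is reproducible
    n_train = int(len(stems) * train)
    n_test = int(len(stems) * test)
    return (stems[:n_train],
            stems[n_train:n_train + n_test],
            stems[n_train + n_test:])

train_set, test_set, val_set = split_pairs([f"deg{i:03d}" for i in range(100)])
print(len(train_set), len(test_set), len(val_set))  # 75 3 22
```

After splitting, the files in each list would be copied into the corresponding train/test/val folders.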

Data split

Part 4 YOLOv5 Model configuration and training

This part is very simple and quick, because the YOLOv5 GitHub repository provides a Google Colab notebook in which everything is described step by step: https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/#clearml-logging-and-automation-new

Nevertheless, I will walk through it personally. First we need to clone the repository in Google Colab, change into it, and install the requirements.

!git clone https://github.com/ultralytics/yolov5  # clone
%cd yolov5
%pip install -qr requirements.txt # install

Inside the repository we need to put the data. It can simply be uploaded from your computer or, as I did, placed on Google Drive and downloaded in Colab. Using Drive lets you work with a larger dataset, because manual uploads are limited in size.

!unzip {data.path}.zip

The next step is to create a .yaml file with parameters that match our data. You don’t have to write it from scratch: just adapt the coco128.yaml file so that the number of classes and their names match, and the folder paths point to your splits.
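For a single-class degu detector, the file might look roughly like this (the file name and paths are placeholder assumptions, modeled on coco128.yaml; point them at wherever your split folders live):

```yaml
# degu.yaml -- dataset config adapted from coco128.yaml (example paths)
path: ../datasets/degu   # dataset root directory
train: images/train      # train images, relative to path
val: images/val          # validation images
test: images/test        # optional

nc: 1                    # number of classes
names: ['degu']          # class names, in class-ID order
```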

The only thing left is to run the training command, on which you can set parameters such as:

  • img — resizes the images (in this case to 640 pixels)
  • batch — sets the batch size (in this case, each step processes a batch of 40 images)
  • epochs — sets the number of training epochs (in this case 100, meaning the model iterates over the data 100 times)
  • data — sets the path to the .yaml file
  • cfg — sets the model configuration
  • weights — sets the path to the weights (in this case the pretrained yolov5l weights)
  • nosave — saves only the final checkpoint
  • cache — caches images for faster training

However, not all of them need to be defined:

!python train.py --img 640 --batch 40 --epochs 100 --data {file}.yaml --weights yolov5l.pt --cache

After the training process we can download the weights, which can then be used to test the model on images or videos.

# export your model's weights for future use
from google.colab import files
files.download('./runs/train/exp/weights/best.pt')

Part 5 Testing the model

With the model’s weights downloaded, you can easily test its performance. The authors of YOLOv5 also let you quickly try the model on other data. All you have to do is install the requirements from GitHub and then load the model using the PyTorch Hub API.

# only once
pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt', force_reload=True)

After uploading the image, the results are immediately visible.

image = '.jpg'  # path to your test image
results = model(image)
print(results)
results.show()

Conclusion

I hope you liked this tutorial and that, with the options shown here, it will allow you to train many models in different ways.

Don’t forget to visit my github: https://github.com/MartynaM11 where you can find all the helpful files explained in this article.

Be ready for many new articles and tutorials!
