Working with YOLOv5

Bharath Sivakumar
Jun 28, 2020 · 8 min read

In this blog post, we are going to talk about how to set up YOLOv5 and get started. If you haven’t come across YOLOv5 already, its creators, Ultralytics, have published a brief write-up explaining the idea behind its creation and its performance.

One major advantage of YOLOv5 over other models in the YOLO series is that YOLOv5 is written in PyTorch from the ground up. This makes it useful for ML engineers, as there is a vast and active PyTorch community to lean on for support.

YOLOv5 is also much faster than all the previous versions of YOLO. In addition, YOLOv5 is nearly 90 percent smaller than YOLOv4, which means it can be deployed to embedded devices much more easily. To learn more about the advantages of YOLOv5, please refer to the write-up mentioned above.

SETTING UP YOLOv5

Let’s dive straight into setting up YOLOv5 on your system. The following guide shows how to do so on either Windows 10 or Ubuntu 18.04; the setup might also work on other versions of Windows and Ubuntu, but it has not been tested on them.

If you are not interested in setting YOLOv5 up on your local system and just want access to an up-to-date working environment with YOLOv5, then check out the Google Colab notebook tutorial created by Ultralytics themselves.

Let’s start with setting it up on your local system. If you are using a Windows system, make sure you have Git installed and added to your PATH. Similarly, on a Linux system, make sure Git is installed and that you can run git commands from the Ubuntu terminal. Now change directory to the folder where you want the YOLOv5 repository to live and clone the repo:

git clone https://github.com/ultralytics/yolov5.git

This will create a new folder called “yolov5” in the current directory; change into it. Next, make sure you have a virtual environment installed and activated. If you don’t have one yet, use the following command to install Python’s virtual environment support on Ubuntu:

sudo apt-get install python3-venv

To create a virtual environment now use the command:

python3 -m venv <your_environment>

Where “<your_environment>” is a name of your choosing; I used “yolov5_environment”. Now activate the environment with the following command:

source <your_environment>/bin/activate

You have now successfully activated your virtual environment. Doing the same on a Windows system is slightly trickier. I used the Visual Studio Code editor to create my Python environment, then changed into the directory where I wanted to clone the yolov5 repository. Visual Studio Code’s own Python documentation explains how to create and select a virtual environment inside the editor.

Once you have activated the virtual environment on your VS Code, use the VS Code terminal to get into the yolov5 directory that you have just cloned.

The reason for activating a Python virtual environment is to prevent dependency clashes. Different projects might need different versions of the same library, NumPy for example, so you create a virtual environment for each project and install all of that project’s required libraries inside it.
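As a quick sanity check (this snippet is just an illustration, not part of the YOLOv5 repository), you can ask Python itself whether the interpreter you are running lives inside a virtual environment:

```python
import sys

# When a virtual environment is active, sys.prefix points inside the
# environment, while sys.base_prefix still points at the base install.
def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

print("interpreter:", sys.executable)
print("virtualenv active:", in_virtualenv())
```

If the second line prints True, pip installs will land inside your environment rather than the system-wide Python.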

After you get into the cloned yolov5 repository, and if you are using the Windows operating system, edit the requirements.txt file in the yolov5 folder and replace the line “pycocotools” with “pycocotools-windows” (this is not needed on Linux). Now, to install the dependencies needed for yolov5, run the following command:

pip install -U -r requirements.txt

This command installs all the libraries listed in the “requirements.txt” file, which are all you need to work with yolov5. If the command runs without any problem, you can move straight to the “DETECTION USING YOLOv5” section, where we use YOLOv5 for detection.
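If you want to confirm programmatically that the key dependencies landed in your environment, a small import check like the following works. The package names below are examples I picked for illustration; the authoritative list is whatever requirements.txt pulls in:

```python
import importlib

# Try importing each package and record whether it is available.
def check_packages(pkgs):
    results = {}
    for pkg in pkgs:
        try:
            importlib.import_module(pkg)
            results[pkg] = True
        except ImportError:
            results[pkg] = False
    return results

# A few packages the requirements file typically installs.
print(check_packages(('numpy', 'cv2', 'matplotlib')))
```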

But you might encounter some problems when running this command. If that is the case, stay with this section to see how to solve them. For example, the installation can fail because your hardware is too old, or because your system has multiple Python versions and cannot locate the NumPy and Cython packages in your virtual environment, which the subsequent dependencies need. If that is indeed the case, retry by first running the following two commands one after another on an Ubuntu system:

pip install numpy==1.17
pip install cython

Now try running the pip install -U -r requirements.txt command again. This will most likely install all the dependencies successfully, and you can move on to the “DETECTION USING YOLOv5” section. But if you encounter a problem once again, it is most likely due to a problem installing PyTorch. Edit your requirements.txt file, remove the line “torch>=1.5”, and run the install command once more:

pip install -U -r requirements.txt

The command should now run without any error. All that remains is installing PyTorch. To do so, first go to the official PyTorch website, pytorch.org.

As you scroll down, you will reach a section called “QUICK START LOCALLY”, which gives the install command for each combination of operating system and CUDA version. Choose the “Stable (1.5)” option for the PyTorch build. I didn’t have a GPU in either my Linux or my Windows system and therefore didn’t have CUDA installed, so I chose “None” for that option. If you have a GPU and want to check whether it has CUDA support, consult NVIDIA’s list of CUDA-enabled GPUs.

If your GPU isn’t supported, choose “None” for “CUDA”. I used “pip” as the “package” option for installation, so for both my Linux and Windows systems I got the following command:

pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
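Once the install finishes, a quick check from the Python interpreter confirms that the CPU-only build works (a small illustration, assuming PyTorch installed correctly):

```python
# Sanity check for a CPU-only PyTorch install.
import torch

print(torch.__version__)           # should report the installed version
print(torch.cuda.is_available())   # False on a CPU-only build

# A tiny tensor computation to confirm things run end to end.
x = torch.ones(2, 3)
print((x * 2).sum().item())        # 12.0
```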

And here we go: the YOLOv5 setup on your machine is ready for action. Let’s now move on to the detection phase and start playing with YOLOv5.

DETECTION USING YOLOv5

To start playing around with YOLOv5, simply run the following command from your terminal after changing into the yolov5 directory you cloned earlier:

python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4

Now, let’s understand what this command does.

This command runs the “detect.py” program with a few command-line arguments. It applies the YOLOv5 algorithm to all the images in the ./inference/images/ folder, with the weights set to the yolov5s.pt file in your yolov5 directory. You don’t have to download this file explicitly, since the script is designed to download it automatically on first run if it is missing, but you might still face a problem. If you get an error saying “yolov5s.pt” was not found, download the file manually as described next.

Download the file called “yolov5s.pt” and save it in the yolov5 directory on your local system. You can also download the file straight from the command line on a Linux system:

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1R5T6rIyy3lLwgFXNms8whc-387H0tMQO' -O yolov5s.pt

This saves the yolov5s.pt file from that Drive link into a file called yolov5s.pt in your current directory, which should be the yolov5 directory. Once it’s done, run the following command again:

python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4

All the images in the “inference/images/” folder of the yolov5 directory are put through the YOLOv5 algorithm, and the images, with boxes drawn over them, are saved in the “inference/output” folder, which is created once the program finishes. The weights file was trained on the COCO dataset, and a box is drawn whenever an object in an image matches a COCO class with a confidence greater than or equal to the “conf” command-line argument, which is 0.4 in our case. We can also just run the command:

python detect.py

without any extra parameters. That works just as well, since by default those arguments are the same as the ones we set in the command:

python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4

If you want to change the default confidence score used by the program to, say, 0.25, you can do so by changing the following line in the “detect.py” program:

parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold')

to:

parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')

Similarly, if you want to change other default parameters in the program, you are free to do so.
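To see how such defaults work, here is a minimal argparse sketch in the spirit of detect.py. The flag names mirror the ones used above, but this is an illustration rather than the exact code from the repository:

```python
import argparse

# Declare the command-line options; hyphens in flag names become
# underscores in the parsed attribute (e.g. --conf-thres -> conf_thres).
parser = argparse.ArgumentParser()
parser.add_argument('--source', type=str, default='./inference/images/',
                    help='input folder or single image')
parser.add_argument('--weights', type=str, default='yolov5s.pt',
                    help='path to the trained weights file')
parser.add_argument('--conf-thres', type=float, default=0.4,
                    help='object confidence threshold')

# Parsing an empty argument list yields the defaults, which is why
# running the script with no flags behaves the same as spelling them out.
opt = parser.parse_args([])
print(opt.source, opt.weights, opt.conf_thres)
```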

We can also give a single image as input to the program. In that case, the program again creates a new “inference/output” folder in the yolov5 directory and saves the input image there with the appropriate boxes drawn over it. If you want to change where the output images are saved, the default location of the input images, and so on, you can open the “detect.py” program and customize it to your needs.
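If you prefer to inspect the results programmatically, a small sketch like the following, assuming the default inference/output location, lists whatever a run produced:

```python
from pathlib import Path

# The annotated images land in inference/output by default; this
# illustrative helper lists them, or returns an empty list if the
# folder has not been created yet (i.e. detect.py has not been run).
def list_outputs(output_dir: str = 'inference/output') -> list:
    folder = Path(output_dir)
    if not folder.exists():
        return []
    return sorted(p.name for p in folder.glob('*.jpg'))

print(list_outputs())
```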

To give you a sense of what the “detect.py” program does, here are the inputs and outputs, in succession, for the images present in the inference/images folder by default:

[Image: zidane.jpg (original input)]
[Image: zidane.jpg with predicted objects and confidence scores]
[Image: bus.jpg (original input)]
[Image: bus.jpg with predicted objects and confidence scores]

I hope you were able to get YOLOv5 started and I hope you enjoy tweaking the program to your needs.

Quantrium.ai

Experiences in the tech kitchen at Quantrium


This is Quantrium’s official tech blog. A blog on how technology enables us to develop great software applications for our clients.

Bharath Sivakumar

Written by

A Machine Learning enthusiast who wants to make Machine Learning tools accessible to everybody
