Number plate detection on Indian car vehicles using YOLOv2

Ravi Theja
6 min read · Mar 31, 2018

In this article I will discuss how I used YOLOv2 for number plate detection on Indian vehicles. The article gives you an idea of how to prepare the data, train, and test using YOLOv2.

I thank Jeremy Howard and @fastdotai classmates for inspiring me to write this blog post.

Context of the problem

Automatic number plate recognition (ANPR) has already been widely tested and adopted in many countries for surveillance purposes. In India, however, the size of number plates is not fixed, and the CCTV cameras used for surveillance are not high resolution — so ANPR remains a challenge to be solved. I tried solving the detection part of the problem using YOLOv2.

Take a look at the following results before we discuss how to use YOLOv2.

Detection results on test images (test_image_1.jpg, test_image_2.jpg, test_image_3.jpg)

This post is divided into the following stages:

  1. Clone Darknet yolov2.
  2. Data Annotation.
  3. Data preparation as needed by YOLO.
  4. Configuration files preparation.
  5. Training.
  6. Testing.
  7. Result on live video.

Clone darknet yolov2

Get the darknet yolov2 from GitHub. Run the following command from the terminal to clone the repository.

git clone https://github.com/AlexeyAB/darknet.git

Data Annotation

I used BBox-Label-Tool for annotating the data; you can also try Microsoft VoTT. Clone the BBox-Label-Tool repository:

git clone https://github.com/puzzledqs/BBox-Label-Tool.git

Store the data to be annotated in the “001” folder of the “Images” directory. You need to make a change in main.py based on your images’ file extension: on lines 134 and 152 of main.py, I changed the extension to ‘*.jpg’ since my images are JPEGs.

Line 134: self.imageList = glob.glob(os.path.join(self.imageDir, '*.jpg'))
Line 152: filelist = glob.glob(os.path.join(self.egDir, '*.jpg'))

Now run the main.py file.

python main.py

In the GUI, enter “001” as the Image Dir, load the dataset from the “001” folder, and start annotating.

You can even annotate two bounding boxes in the same image.

The bounding box coordinates are stored in .txt files in the “001” folder of “Labels” directory.

# For the first image
1
587 169 609 180
# For the second image
2
516 397 563 430
72 414 116 434

The first line in each .txt file indicates the number of bounding boxes, and the following lines contain the bounding box coordinates.
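These label files are easy to parse. The helper below is a small sketch, assuming the format shown above (the box count on the first line, then one box per line):

```python
# Parse a BBox-Label-Tool annotation file: the first line is the number of
# boxes, each following line is "xmin ymin xmax ymax" in pixels.
def parse_bbox_label(text):
    lines = [ln for ln in text.strip().split("\n") if ln.strip()]
    n_boxes = int(lines[0])
    boxes = []
    for ln in lines[1:1 + n_boxes]:
        xmin, ymin, xmax, ymax = (int(v) for v in ln.split())
        boxes.append((xmin, ymin, xmax, ymax))
    return boxes

# The two-box example from above:
print(parse_bbox_label("2\n516 397 563 430\n72 414 116 434"))
# [(516, 397, 563, 430), (72, 414, 116, 434)]
```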

Data preparation as needed by YOLO

Each line after the box count in the above files has the format:

|bounding box left X| |bounding box top Y| |bounding box right X| |bounding box bottom Y|

But YOLOv2 needs one line per object in the following format, with all values normalized by the image width and height:

|category number| |object center X| |object center Y| |object width| |object height|

Run the following Python script on the .txt files to convert them to the YOLOv2 format.

import os
from os import walk
from PIL import Image
from shutil import copyfile

# Convert a box from (xmin, xmax, ymin, ymax) in pixels to the
# normalized (center x, center y, width, height) format YOLO expects
def convert(size, box):
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

inputpath = "yolo_labels/"
outputpath = "yolo_labels_new/"
cls_id = 0

# Get the label files in inputpath
txt_name_list = []
for (dirpath, dirnames, filenames) in walk(inputpath):
    txt_name_list.extend(filenames)
    break

for txt_name in txt_name_list:
    txt_path = inputpath + txt_name
    with open(txt_path, "r") as txt_file:
        lines = txt_file.read().split('\n')

    t = int(lines[0])  # t contains how many bounding boxes are in the image

    if len(lines) != 2:  # skip label files that contain no boxes

        # Open the output label file
        txt_outpath = outputpath + txt_name
        print("Output:" + txt_outpath)
        txt_outfile = open(txt_outpath, "w")

        img_path = "yolo_data/" + "yolo_images/" + os.path.splitext(txt_name)[0] + ".jpg"
        im = Image.open(img_path)
        w = int(im.size[0])
        h = int(im.size[1])

        for i in range(1, t + 1):
            line = lines[i].split(" ")
            xmin = line[0]
            ymin = line[1]
            xmax = line[2]
            ymax = line[3]

            b = (float(xmin), float(xmax), float(ymin), float(ymax))
            bb = convert((w, h), b)
            txt_outfile.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + "\n")
        txt_outfile.close()

        # Copy the corresponding image alongside the new labels
        dst_path = "yolo_images_new/" + os.path.splitext(txt_name)[0] + ".jpg"
        copyfile(img_path, dst_path)

Now the .txt file will change to the following format.

0 0.274147727273 0.684027777778 0.0681818181818 0.0347222222222

If there are two bounding boxes in an image, it will be in the following format.

0 0.324573863636 0.747395833333 0.0610795454545 0.0364583333333
0 0.700994318182 0.603298611111 0.078125 0.0329861111111
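To sanity-check a converted label, you can map the normalized values back to pixel corners. This is a minimal sketch; the 704×576 image size used here is an assumption for illustration — substitute your actual image dimensions:

```python
# Map a YOLO-format label line back to pixel corner coordinates.
# NOTE: img_w = 704, img_h = 576 below are assumed dimensions for this example.
def yolo_to_corners(line, img_w, img_h):
    cls_id, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls_id), (round(cx - w / 2), round(cy - h / 2),
                         round(cx + w / 2), round(cy + h / 2))

print(yolo_to_corners(
    "0 0.274147727273 0.684027777778 0.0681818181818 0.0347222222222",
    704, 576))
# (0, (169, 384, 217, 404))
```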

Now make a directory containing both the images and the labels (.txt files).

Create an “obj” folder in the “data” directory of the darknet and store these images and labels in the “obj” folder.

Now, we need to tell YOLOv2 which images constitute the training set and which constitute the test set. The percentage of images used for the test set can be changed with percentage_test. The following script creates the train.txt and test.txt files.

import glob, os

# Directory "obj" is in the data folder of darknet and contains both images and labels
current_dir = "obj"
# Directory where the data will reside, relative to the darknet executable
path_data = 'data/obj/'
# Percentage of images to be used for the test set
percentage_test = 10

# Create and/or truncate train.txt and test.txt
file_train = open('train.txt', 'w')
file_test = open('test.txt', 'w')

# Populate train.txt and test.txt
counter = 1
index_test = round(100 / percentage_test)
for pathAndFilename in glob.iglob(os.path.join(current_dir, "*.jpg")):
    title, ext = os.path.splitext(os.path.basename(pathAndFilename))
    if counter == index_test:
        counter = 1
        file_test.write(path_data + title + '.jpg' + "\n")
    else:
        file_train.write(path_data + title + '.jpg' + "\n")
        counter = counter + 1

file_train.close()
file_test.close()
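To confirm the split behaves as expected, the counter logic above can be simulated on its own: every index_test-th image goes to the test set, so for my 1637 training images roughly 10% land in test.txt. A minimal sketch:

```python
# Simulate the train/test counter logic for n_images and return
# (train_count, test_count); every index_test-th image goes to the test set.
def split_counts(n_images, percentage_test=10):
    counter = 1
    index_test = round(100 / percentage_test)
    train, test = 0, 0
    for _ in range(n_images):
        if counter == index_test:
            counter = 1
            test += 1
        else:
            train += 1
            counter += 1
    return train, test

print(split_counts(1637))  # (1474, 163): about 10% in the test set
```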

The train.txt and test.txt files should each contain one image path per line, prefixed with data/obj/.

Configuration files preparation

You need to create the following three files in the “cfg” directory.

  1. obj.data
  2. obj.names
  3. yolo-voc.2.0.cfg

obj.data

classes= 1  
train = /media/mpl1/mpl_hd1/yolo/yolov2/darknet/cfg/train.txt
valid = /media/mpl1/mpl_hd1/yolo/yolov2/darknet/cfg/test.txt
names = obj.names
backup = backup/

Note that you need to give the full (absolute) paths for the train and valid files.

obj.names

PLATE

yolo-voc.2.0.cfg

Change filters = 30 on line 224 and classes = 1 on line 230 of yolo-voc.2.0.cfg in the cfg folder of darknet.

The formula is filters = (classes + 5) * 5; classes = 1 in my case, so filters = 30.
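The formula comes from YOLOv2 predicting 5 anchor boxes per cell, each with 4 box coordinates, 1 objectness score, and one score per class — hence (classes + 5) per anchor. A quick check:

```python
# filters in the last conv layer = num_anchors * (classes + 4 coords + 1 objectness)
def yolo_v2_filters(classes, num_anchors=5):
    return num_anchors * (classes + 5)

print(yolo_v2_filters(1))   # 30  -> matches the cfg change above
print(yolo_v2_filters(20))  # 125 -> the default for the 20 VOC classes
```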

Training

Before starting training, you need to download the pre-trained convolutional weights (darknet19_448.conv.23).

Time to start training on your dataset. My training dataset size is 1637 images.

darknet.exe detector train cfg/obj.data cfg/yolo-voc.2.0.cfg darknet19_448.conv.23

Testing

Once training is done, the weights are stored in the backup folder of darknet. We will use “yolo-obj_final.weights” to test on an image.

darknet.exe detector test cfg/obj.data cfg/yolo-voc.2.0.cfg yolo-obj_final.weights test_image.jpg

Result on live video

References:

https://arxiv.org/abs/1612.08242
