Detecting Chess Pieces Using the TensorFlow Object Detection API

Viren Dhanwani · Published in The Startup · Aug 18, 2020

I recently subscribed to a weekly data science newsletter that sends out a new dataset to work on every Monday. Working as a web developer, I get little time to practice and learn all the data-related stuff. Now, every Monday, I had a dataset in my inbox with very high usability, focused on a particular topic.

One of those topics was detecting chess pieces in the given dataset, so I decided to learn and use the TensorFlow Object Detection API.

The TensorFlow models GitHub repo contains all the required scripts and provides a model zoo from which to pick a model as per your needs. I selected the Faster R-CNN Inception v2 model. In the given dataset, the images were already split into training and testing folders.

Now, the first step of the process is to label (annotate) the images. This can be done with labelImg (https://github.com/tzutalin/labelImg) or a similar tool, which produces an XML file for every image. A small script then needs to be run to combine all of those XML files into a single CSV file.
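I ended up skipping this step (see below), but for reference the conversion usually looks something like the following; this is a minimal sketch assuming Pascal VOC-style XML files from labelImg, with hypothetical folder and output file names.

import glob
import xml.etree.ElementTree as ET

import pandas as pd


def xml_to_csv(xml_dir):
    """Collect every labelImg XML annotation in xml_dir into one DataFrame."""
    rows = []
    for xml_file in glob.glob(f"{xml_dir}/*.xml"):
        root = ET.parse(xml_file).getroot()
        filename = root.find("filename").text
        width = int(root.find("size/width").text)
        height = int(root.find("size/height").text)
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            rows.append({
                "filename": filename,
                "width": width,
                "height": height,
                "class": obj.find("name").text,
                "xmin": int(box.find("xmin").text),
                "ymin": int(box.find("ymin").text),
                "xmax": int(box.find("xmax").text),
                "ymax": int(box.find("ymax").text),
            })
    return pd.DataFrame(rows)


# "train" and "test" are hypothetical folders holding the images and XML files
for split in ("train", "test"):
    xml_to_csv(split).to_csv(f"{split}_labels.csv", index=False)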

Luckily (and thankfully for me), this dataset already contained an annotations.csv, so I was saved from the time-consuming and boring job of labelling images.

Second, I generated a .tfrecord file for both the testing and the training data (a rough sketch of that step follows the label map below). After that, I created the .pbtxt file, which contains all the classes I needed to detect.

item {
  id: 1
  name: 'whitequeen'
}
item {
  id: 2
  name: 'whiteking'
}
item {
  id: 3
  name: 'whiteknight'
}
item {
  id: 4
  name: 'whitepawn'
}
item {
  id: 5
  name: 'whitebishop'
}
item {
  id: 6
  name: 'whiterook'
}
item {
  id: 7
  name: 'blackqueen'
}
item {
  id: 8
  name: 'blackking'
}
item {
  id: 9
  name: 'blackknight'
}
item {
  id: 10
  name: 'blackpawn'
}
item {
  id: 11
  name: 'blackbishop'
}
item {
  id: 12
  name: 'blackrook'
}
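As promised, here is a minimal sketch of the .tfrecord generation step using the object_detection utilities. It assumes TF 1.x, annotation CSVs with filename/width/height/class/xmin/ymin/xmax/ymax columns, JPEG images, a class-to-id mapping that matches the label map above, and hypothetical file paths; adjust them to wherever your data actually lives.

import io
import os

import pandas as pd
import tensorflow as tf  # TF 1.x API assumed
from PIL import Image
from object_detection.utils import dataset_util

# Must match the ids in the label map above
CLASS_TO_ID = {
    'whitequeen': 1, 'whiteking': 2, 'whiteknight': 3, 'whitepawn': 4,
    'whitebishop': 5, 'whiterook': 6, 'blackqueen': 7, 'blackking': 8,
    'blackknight': 9, 'blackpawn': 10, 'blackbishop': 11, 'blackrook': 12,
}


def create_tf_example(filename, group, image_dir):
    """Build one tf.train.Example from all annotation rows of a single image."""
    with tf.gfile.GFile(os.path.join(image_dir, filename), 'rb') as f:
        encoded_jpg = f.read()
    width, height = Image.open(io.BytesIO(encoded_jpg)).size

    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/source_id': dataset_util.bytes_feature(filename.encode('utf8')),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(b'jpg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature((group['xmin'] / width).tolist()),
        'image/object/bbox/xmax': dataset_util.float_list_feature((group['xmax'] / width).tolist()),
        'image/object/bbox/ymin': dataset_util.float_list_feature((group['ymin'] / height).tolist()),
        'image/object/bbox/ymax': dataset_util.float_list_feature((group['ymax'] / height).tolist()),
        'image/object/class/text': dataset_util.bytes_list_feature(
            [c.encode('utf8') for c in group['class']]),
        'image/object/class/label': dataset_util.int64_list_feature(
            [CLASS_TO_ID[c] for c in group['class']]),
    }))


def csv_to_tfrecord(csv_path, image_dir, output_path):
    """Write one TFRecord file from a CSV of bounding-box annotations."""
    annotations = pd.read_csv(csv_path)
    with tf.python_io.TFRecordWriter(output_path) as writer:
        for filename, group in annotations.groupby('filename'):
            writer.write(create_tf_example(filename, group, image_dir).SerializeToString())


# Hypothetical paths -- adjust to where the dataset actually lives
csv_to_tfrecord('train_labels.csv', 'train/', 'train.record')
csv_to_tfrecord('test_labels.csv', 'test/', 'test.record')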

Lastly, I edited my model's config file, pointing it at the .pbtxt and .tfrecord files I had created and setting the number of classes to detect, which is 12.
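For reference, the parts of the config I touched look roughly like this (a sketch based on the sample faster_rcnn_inception_v2 config; the file paths and the fine-tune checkpoint are placeholders for wherever your files actually sit):

model {
  faster_rcnn {
    num_classes: 12
    # ... rest of the model section left as in the sample config ...
  }
}
train_config {
  # placeholder path to the extracted pretrained model's checkpoint
  fine_tune_checkpoint: "pretrained_model/model.ckpt"
  # ...
}
train_input_reader {
  label_map_path: "training/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "train.record"
  }
}
eval_input_reader {
  label_map_path: "training/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "test.record"
  }
}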

Now, we train!
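In the Colab notebook, that boils down to a single command along these lines (assuming the legacy train.py script from the object_detection repo, which is the one Sentdex's tutorial uses; the folder and config names are placeholders):

python train.py --logtostderr \
    --train_dir=training/ \
    --pipeline_config_path=training/faster_rcnn_inception_v2.config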

I ran the training for 10,000 steps in Google Colab with a GPU runtime; the loss at my last step was 0.14513482, but the overall loss was 0.417, which is okay-ish. Before testing, the inference graph that was created needs to be exported. After exporting, you get a frozen inference graph, which is used to make predictions, along with several checkpoints of the model.
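The export is done with the stock export_inference_graph.py script from the object_detection repo; the checkpoint number and directories below are placeholders for whatever your last checkpoint actually is:

python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/faster_rcnn_inception_v2.config \
    --trained_checkpoint_prefix training/model.ckpt-10000 \
    --output_directory inference_graph/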

Test results time!!

As you can see, it's a mixed bag. In cases where there are multiple objects in the picture, the model identifies most of them correctly, but when there is only a single object in the picture, it doesn't always identify it.

Finally, credits to Luke Posey for the dataset and Sentdex for showing the process.

Link to Sentdex’s tutorial — https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/

Link to my Github repository — https://github.com/virendhanwani/chess-object-detection

Newsletter — Dataset Daily by Luke Posey

Thanks for reading!
