Lego Sorter using TensorFlow on Raspberry Pi

Paco Garcia
Sep 2, 2018


Neural Network based Lego sorter

In early 2017 I stumbled across one of the documented Google Cloud and TensorFlow use cases, titled "How a Japanese cucumber farmer is using deep learning and TensorFlow." It stuck in my mind and sparked my curiosity.

It was around December 2017 that I decided to do something about it and started thinking hard about how to do it myself. I brought it up in conversation with friends and started to get a lot of good guidance and advice. Two pieces of it were instrumental in achieving the goal:

“…the software is easy, but getting to a reliable mechanical design will be the hard part…”

“…get a 3D printer, it will make your life easier…”

I started from zero, with no knowledge of any of the technologies I would use besides Python, and it took me roughly 200 hours spread across six months.

The Goal

I defined my objective as: build a machine that can reliably sort 10–20 types of Lego bricks without manual feeding, using an image-based neural network classification model.

I was confident I could take the Google Cloud use case and replicate it. As is true in any technology endeavor, along the way I identified a lean path to the objective and made some trade-offs to achieve a reasonable ‘time to market’.

5 Parts to This Blog

This is part of a 5-blog series covering the mechanical and software design of the Lego Sorter, as well as sharing the training set and some evaluation sets:

1. Lego Sorter using TensorFlow on Raspberry Pi

2. Mechanical Separation (Design, Motors and Sensors)

3. Overview of the Software stack

4. Using Inception V3 to Identify LEGO vs. Generic Bricks

5. Try It Yourself: 2 Big Data Sets so you can Replicate this Project

The Machine in Action

Lego Sorter with TensorFlow on Raspberry Pi

Overview of the Design

I will provide more details in the Mechanical and Software blogs, but at a high level, this is how I designed the separator:

I’m using a Motor and Servo HAT, as well as a custom board to control the IR beam sensors and backlight LEDs. GPIO and PWM signals driven from Python control the movement of the entire machine, and image recognition with OpenCV detects any shortcomings in the mechanical separation (e.g., two Lego pieces in a single image), as sketched below.
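For illustration, here is a minimal OpenCV sketch of that separation check: it counts dark silhouettes against the bright backlight and flags frames that contain more than one piece. The camera index, threshold mode, and minimum area are assumptions for the sketch, not values from this project.

```python
# Minimal sketch: detect mechanical separation failures by counting
# backlit silhouettes in a frame. Thresholds and camera index are illustrative.
import cv2

def count_pieces(frame_bgr, min_area=500):
    """Return the number of distinct dark silhouettes in a backlit frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # The backlight makes the background bright and the brick dark,
    # so use an inverted Otsu threshold to isolate the piece(s).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # [-2] picks the contour list in both OpenCV 3.x and 4.x return formats.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    # Ignore tiny blobs (dust, sensor noise) below min_area pixels.
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # Pi camera / USB camera index assumed
    ok, frame = cap.read()
    if ok and count_pieces(frame) > 1:
        print("Separation failure: more than one piece in frame, run the separator again")
    cap.release()
```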

Training Set and Classifier Approach

I used a retrained Inception V3 model to classify the 11 brick classes. I ran the training with the GPU build of TensorFlow, which leverages my desktop’s CUDA-enabled NVIDIA GPU.
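If you want to try the retraining step, below is a minimal transfer-learning sketch in tf.keras that freezes an ImageNet-pretrained Inception V3 and trains a new 11-class head. It is not the exact retraining script used for this project; the data directory, batch size, and epoch count are placeholders.

```python
# Transfer-learning sketch: retrain an Inception V3 head on 11 brick classes.
# Assumes one folder per class under data/train; paths and hyperparameters are illustrative.
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception V3's native input resolution
NUM_CLASSES = 11

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
# Inception V3 expects inputs scaled to [-1, 1].
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # keep the pretrained weights frozen; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # runs on the CUDA GPU if TensorFlow can see one
```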

Results for First Run

Disclaimer: below are the results of the first run, and they are quite exceptional. I do believe there will be significant variation across runs, and I expect the yield to fluctuate in the 75–85% range.

How does this first run compare to the Cucumber Farmer?

My initial run scored well on both mechanical and classifier accuracy, but I did see the same drop mentioned in the article when going from the training accuracy to the real-world implementation.

I came very close to replicating the case, with the key differences being:

Automatic Feeder and Separation: Having an automatic feeder and separation mechanism automated the capture of the training set, which provided a material time saving.

Training Set and Camera: My setup has a single camera and a training set 3 times smaller.

Overall, this is how the scorecard came out:

Note: This is based on the published article; let me know if I’ve missed any important details of the Google Cloud/TensorFlow use case.

Special Thanks

This would not have been possible without the great help of these fantastic companies, individuals and organizations:

Google Codelabs

PyImageSearch

Autodesk Tinkercad

Raspberry Pi Foundation

TensorFlow

NVIDIA CUDA

ImageNet

OpenCV

Python Software Foundation
