Facial Recognition on the new Google Coral Development Board — Part 1

How to get started with the Edge TPU

Pietra F T Madio
5 min read · Aug 7, 2019

As an AI Research Engineer at Embecosm, I had the opportunity over the summer to start working full-time on my own AI-driven project.

Google’s Development Board. Image taken from: https://coral.withgoogle.com/docs/dev-board/get-started/

Artificial intelligence is typically associated with high-power computing and big data, but there is an increasing need to develop AI systems for embedded applications. Embecosm has long been highly regarded for its specialisation in embedded devices, so we believe that applying this expertise to the growing field of AI will be highly beneficial to both communities.

Our idea

My supervisor, Lewis Revill, and I decided to implement a facial recognition system on Google’s Edge TPU, with the goal of exploring the combination of AI and embedded systems.

Google’s Edge TPU

I recently found out that Google had released a development board whose main purpose is to prototype on-device machine learning models. As if that weren’t interesting enough, the board is capable of high-speed machine learning inferencing: its coprocessor can perform 4 trillion operations per second (4 TOPS) at low power, drawing only 0.5 watts per TOPS (that is, 2 TOPS per watt).

Google’s Development Board. Image taken from: https://coral.withgoogle.com/products/dev-board

More information about the board can be found here: [Dev Board | Coral]

Facial Recognition

The hardware design of the Edge TPU was developed to accelerate deep feed-forward neural networks such as convolutional neural networks (CNNs), as opposed to recurrent neural networks (RNNs) or Long Short-Term Memory models (LSTMs). Computer vision therefore seemed like an ideal topic to explore.

Originally, we pondered the idea of working with sound. However, for the reason just mentioned, models that process sound fall outside the Edge TPU’s capabilities, as they normally rely on time-based architectures such as RNNs and LSTMs.

We chose facial recognition because, after going through the Edge TPU’s documentation, we noticed that while Google provides demos for classification and object detection models, there were no facial recognition demos. Face recognition is closely related to image classification and object detection, so it seemed like a reasonable next step.

The process of setting up the Edge TPU

For the remainder of this blog post, I’ll recount the process of setting up the Coral board. To set it up, we simply followed the tutorial from the Google Coral website, which I describe below. The full instructions can be found here: [Get Started | Coral]

The Edge TPU can be used from both Linux and Mac machines. All the commands below were run from a Mac, so some of them may need to be modified to run on Linux. We used a USB-A to USB-microB cable, a USB-C to USB-C cable, and a 2–3A (5V) USB Type-C power supply. Additionally, we needed to install a serial console program such as screen (which, luckily enough, is available on a Mac by default) and the latest fastboot tool.
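fastboot ships as part of Android’s platform-tools. As a rough sketch of how to get it onto a Mac (my own summary, not the official tutorial steps), you can download Google’s archive and add it to your PATH:

curl -O https://dl.google.com/android/repository/platform-tools-latest-darwin.zip
unzip platform-tools-latest-darwin.zip
export PATH="$PATH:$(pwd)/platform-tools"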

Starting the board

To be able to flash the board, we first had to install the udev driver, which is required to communicate with the Dev Board over the serial console. We then connected the USB-microB cable between the computer and the board’s serial console port. The LEDs start to flash, leading us to the next step, which is to run the command:

screen /dev/cu.SLAB_USBtoUART 115200
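On Linux, the serial device shows up under a different name, typically /dev/ttyUSB0 (the exact name may vary), so the equivalent command would look something like:

screen /dev/ttyUSB0 115200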

Now we just need to power the board. We plugged the 2–3A power cable into the USB-C port labeled “PWR”, then connected the USB-C cable between the computer and the USB-C data port labeled “OTG”.

Different ports on the Google’s Development Board. Image taken from: https://coral.withgoogle.com/docs/dev-board/get-started/

Now that the board is connected, the next step was to download and flash the system image. We downloaded it by running:

curl -O https://dl.google.com/coral/mendel/enterprise/mendel-enterprise-chef-13.zip

Then we unzipped the archive, changed into the new mendel-enterprise-chef-13 directory, and ran:

bash flash.sh
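Putting the whole flashing sequence together, it looks like this:

curl -O https://dl.google.com/coral/mendel/enterprise/mendel-enterprise-chef-13.zip
unzip mendel-enterprise-chef-13.zip
cd mendel-enterprise-chef-13
bash flash.sh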

When it’s done, the system reboots and the console prompts for a login. The default username and password are both mendel.

Connecting to the Internet

The board supports wireless connectivity, so it wasn’t too hard to connect it to the internet. We used the following commands to connect:

nmcli dev wifi connect <NETWORK_NAME> password <PASSWORD> ifname wlan0

and then verified it worked by running:

nmcli connection show
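As a concrete example (with a made-up network name and password), the connect command would look like:

nmcli dev wifi connect HomeWifi password hunter2 ifname wlan0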

Mendel Software

To ensure that we had the latest software available, we updated the packages with the following commands:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

sudo apt-get update

sudo apt-get dist-upgrade

To connect to the board through SSH, we also installed the Mendel Development Tool (MDT). We ran the following command on the host machine to set it up:

pip3 install --user mendel-development-tool
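One caveat from general pip behaviour rather than the tutorial itself: with --user, pip places the mdt executable in a user-level bin directory that may not be on your PATH yet (on Linux this is typically ~/.local/bin):

export PATH="$PATH:$HOME/.local/bin"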

Now we can disconnect the USB-microB cable from the serial console and open a shell over the USB-C cable with mdt shell.

MDT will then generate an SSH public/private key pair and push the public key to the board’s authorized_keys file, allowing us to authenticate with SSH from now on.
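As a quick sanity check, MDT’s devices subcommand lists the boards it can detect, so it’s worth running before opening the shell:

mdt devices
mdt shell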

Demo

Finally, we can test the board by running a demo model. We connected a monitor to the board through the HDMI port and executed the demo with the command:

edgetpu_demo --device

The result is a video of cars that looks like a plain recording, giving the impression that nothing much is going on. In fact, the MobileNet model is executing in real time on the Edge TPU to detect each car.
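If no monitor is available, the Coral docs also describe a streaming variant of the same demo that serves the video over the network instead:

edgetpu_demo --stream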

The following page contains more detailed demos and more information about supported models: [Demos | Coral]
