Hands on with the Coral Dev Board

Getting started with Google’s new Edge TPU hardware

Alasdair Allan
Mar 26, 2019 · 20 min read

I've also gone “Hands on with the Coral USB Accelerator” in a companion article.

At last year’s Google Next conference in San Francisco Google announced two new upcoming hardware products both built around Google’s Edge TPU, their purpose-built ASIC designed to run machine learning inferencing at the edge.

Both a development board and a USB Accelerator, with a form-factor along similar lines to the Intel’s Neural Compute Stick, were announced allowing users to run inferences of pre-trained TensorFlow Lite models locally on their own hardware.

The hardware has now launched into public Beta under the name Coral and, ahead of the launch, I managed to get my hands on some early access hardware. While I go hands on with the USB Accelerator elsewhere, here I'm going to look at the Coral Dev Board.

Opening the box

The Coral Dev Board comes in a small rather unprepossessing box, which is not a bad thing. Not everything has to come in pretty Apple-like boxes, and these days it can be a bit of a red flag when it comes to manufacturing.

Inside the box is the Coral Dev Board itself. It comes fully assembled, so you won't have to try and attach the SoM that houses the NXP i.MX 8M processor and the Google Edge TPU, or the heat sink and cooling fan assembly.

Unfortunately, not everything you need to get started with the Dev Board comes in the box. Before you get going, you’re going to need some supplies.

Gathering the supplies

As the Coral Dev Board arrives without a system image, with only the U-Boot boot loader present, you're going to need a computer running Linux to flash a system image onto the board. You'll also need a few cables.

You’re going to need a USB-A to micro USB cable to connect your computer to the Dev Board’s serial port, a USB-A to USB-C cable to connect your computer to the Dev Board’s data port, and a USB-C to USB-C cable to power the board.

While you might be able to get away with powering the Coral Dev Board from a ‘normal' USB charger and another USB-C to USB-A cable, instead of a USB-C power supply, my experience with the Raspberry Pi has taught me that most USB chargers aren't rated for, and will have problems supplying, enough current to do that.

While the Raspberry Pi needs a 2.5A power supply, the Coral Dev Board specifications say that it might need more, from 2 to 3A. So I'm more than somewhat wary about trying a substitution, as I've got a sneaking suspicion that most, if not all, old-style USB chargers won't be up to powering the Dev Board.

I also wouldn’t try powering the Dev Board from your computer, even if you have a new MacBook with the appropriate USB-C sockets, as the datasheet explicitly warns against it. Presumably for very good reasons.

If you don't have a laptop or desktop running Linux to hand, you can also use a Raspberry Pi to flash the new firmware onto the Coral Dev Board instead.

Flashing the OS onto the development board

The first thing you'll need to do to set up the Coral Dev Board is check that the DIP switches on the board are in the correct state to let us flash it. Located to the right of the SoM, just beneath the 40-pin GPIO header, are four DIP switches.

Before turning the power on to the board you should confirm that they’re in the correct position to boot the board using the onboard eMMC.

Now go ahead and open up a Terminal window on your laptop and, if you don't already have it installed, go ahead and install the screen terminal program. We'll need to use this to talk, via USB serial, to the Dev Board during set up. We also need to install fastboot, which we'll use to flash the OS image onto the Dev Board.

$ sudo apt-get update
$ sudo apt-get install screen
$ sudo apt-get install fastboot

After doing this we need to add some additional udev rules so your laptop recognises the Coral Dev Board when we connect it.

$ sudo sh -c "echo 'SUBSYSTEM==\"usb\", ATTR{idVendor}==\"0525\", MODE=\"0664\", GROUP=\"plugdev\", TAG+=\"uaccess\"' >> /etc/udev/rules.d/65-edgetpu-board.rules"
$ sudo udevadm control --reload-rules && sudo udevadm trigger

Now we can start connecting cables.

Take your USB-A to micro USB cable and plug the normal USB end into your laptop, and then plug the micro USB end into the Dev Board's serial port, which is located to the right of the 40-pin GPIO header block.

Don't plug the power cable into the Dev Board just yet; we're not quite there. However, once you've plugged the serial cable in, you should go ahead and check dmesg to determine the serial port we'll use to talk to the Dev Board.

$ dmesg | grep ttyUSB
[ 2811.796427] usb 1-1.5: cp210x converter now attached to ttyUSB0
[ 2811.808785] usb 1-1.5: cp210x converter now attached to ttyUSB1

The first of the two ports is the one we need, so go ahead and open a serial connection to the Dev Board from your laptop.

$ screen /dev/ttyUSB0 115200

If everything is working correctly your Terminal window should go blank. That’s not exactly unexpected, as the Dev Board isn’t powered on yet.

Grab your USB-C power supply, and your USB-C to USB-C cable, and plug the cable into the right-hand USB-C connector on the Coral Dev Board.

The red LED next to the power socket should turn on, and the fan on top of the heat sink will spin up. In your screen window you should see a load of messages flash by and you should be deposited at the U-Boot prompt.

Enter the following command at the U-Boot prompt,

# fastboot 0

and grab your USB-A to USB-C cable and connect your laptop to the Coral Dev Board. The data cable goes into the left of the two USB-C sockets, so the one to the left of the power cable.

Open up another Terminal window. You should now have two windows open, with the first sitting in screen, connected to the Dev Board, and waiting at the U-Boot prompt. In the newly opened window type the following command,

$ fastboot devices
110841d6f0609912 fastboot

to check that fastboot can see the Dev Board. The hex string on the left is the device ID, and will be different for your board.

If you don’t see anything, check that the Dev Board is in fastboot mode, and the data cable is connected. If you get a “no permissions” error verify that the udev rules file you created looks like this,

$ cat /etc/udev/rules.d/65-edgetpu-board.rules
SUBSYSTEM=="usb", ATTR{idVendor}=="0525", MODE="0664", GROUP="plugdev", TAG+="uaccess"

However, if everything has gone well, you're now in a position to flash the operating system onto the Dev Board. So go ahead and start downloading it to your laptop in the new window using wget. The zip file is approximately 1.4GB, so it might take a while depending on your connection.

$ wget https://dl.google.com/aiyprojects/mendel/enterprise/mendel-enterprise-beaker-18.zip

When it finally downloads, unzip the file, change directories, and start the flashing process using the flash.sh script.

$ unzip mendel-enterprise-beaker-18.zip
$ cd mendel-enterprise-beaker-18
$ bash flash.sh
target reported max download size of 419430400 bytes
sending 'bootloader0' (1006 KB)...
OKAY [ 0.048s]
finished. total time: 0.105s

Both Terminal windows should begin to fill with messages. If everything goes well, about five minutes later the window where you started the flash script should indicate that it has finished and return you to the prompt, and the Coral Dev Board will reboot into the operating system.

If you’ve wandered off to make yourself a cup of coffee a good sign that things have progressed is that, after rebooting to the operating system, the fan on top of the heatsink—which was running the whole time the Dev Board was in U-Boot mode—should now stop.

With the fan now stopped I was sort of curious as to what temperature that ridiculously oversized heatsink was going to reach. So I grabbed my laser infrared thermometer and checked.

With the fan spinning the heatsink was sitting around 30°C (86°F), but with the fan stopped the heatsink temperature rises to 50°C (122°F) with the board idle. It's going to be interesting to see how that changes when the board is running full tilt.

You can now login to the Coral Dev Board in the screen window, with the default username, mendel, and the default password which is also mendel.

Mendel GNU/Linux (beaker) xenial-calf ttymxc0
xenial-calf login: mendel

The board’s hostname is randomly generated the first time it boots, so don’t be surprised when it’s different than mine. You can change it using the hostname command, or if you like it, you can always decide to keep it.

Connecting the development board to your wireless network

Now we're logged into the Dev Board, we can connect it to our wireless network using the nmcli command,

$ nmcli dev wifi connect MY_SSID password MY_PASSWORD ifname wlan0
[ 3661.616148] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
Device 'wlan0' successfully activated with '80cd6a16-20b7-49bb-b479-c693dbe7b4ae'.

and, after activation, we can check the status of our connection.

$ nmcli connection show
MY_SSID             80cd6a16-20b7-49bb-b479-c693dbe7b4ae  802-11-wireless  wlan0
aiy-usb0            b3328303-daee-48c6-a840-20dbd89fd99f  802-3-ethernet   usb0
Wired connection 1  921ae885-37ab-3e0b-a04e-8a91f2ad3562  802-3-ethernet   --

Your Coral Dev Board is now connected to your wireless network and, if we need to, we can find out the IP address that our router has allocated to the Dev Board as below.

$ ip addr | grep wlan0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 3000
inet brd scope global dynamic wlan0

However, the board should advertise itself using mDNS with its hostname. My board chose xenial-calf for its hostname when it first booted so, opening up another Terminal window on my laptop, I should now be able to ping it on the LAN using its mDNS address.

$ ping xenial-calf
PING xenial-calf.home ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=64.466 ms
64 bytes from icmp_seq=1 ttl=64 time=307.115 ms
64 bytes from icmp_seq=2 ttl=64 time=314.602 ms
64 bytes from icmp_seq=3 ttl=64 time=128.294 ms
64 bytes from icmp_seq=4 ttl=64 time=45.199 ms
--- xenial-calf.home ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 45.199/171.935/314.602/116.742 ms

At this point your laptop has done its job.

You can now log out of the Dev Board and kill the screen session using Ctrl-A and then K, go ahead and unplug the serial and data cables from the Coral board, and then shut down your laptop. You won't be needing it any more.

Update: Since this article was written there have been some updates to the Coral Dev Board operating system, including increased security for SSH authentication. You won’t be able to immediately SSH in to the Dev Board using the “mendel” user because password authentication is now disabled by default. You must now first transfer an SSH key onto the board using the new Mendel Development Tool.

Running your first Machine Learning model

Go ahead and SSH into the Coral Dev Board,

$ ssh mendel@xenial-calf
mendel@xenial-calf's password:
Linux xenial-calf 4.9.51-imx #1 SMP Thu Jan 31 01:58:26 UTC 2019
The programs included with the Mendel GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.
Mendel GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Mar 7 14:54:38 2019 from

and then start your first demo which will illustrate the sort of performance boost we can see when we’re offloading inferencing onto the Edge TPU.

$ edgetpu_demo --stream
Press 'q' to quit.
Press 'n' to switch between models.
INFO:edgetpuvision.streaming.server:Listening on ports tcp: 4665, web: 4664, annexb: 4666

This will start a web server serving a video stream of freeway traffic, with real-time inferencing done on the board overlaid on top. Go ahead and open up a browser tab on your laptop and navigate to http://hostname:4664, where hostname is whatever name your board chose on first boot. Mine picked xenial-calf, so I went to http://xenial-calf:4664.

You can toggle between inferencing done with the Edge TPU enabled, and inferencing done using just the board's CPU, by pressing the 'n' key in the Terminal window where you ran the demonstration app.

The difference in inferencing speed is actually astonishing. With the Edge TPU enabled, inferencing on the video stream (detecting cars in the stream of traffic) happens at 70 fps or more. However, when you disable the Edge TPU and rely on the board's CPU, inferencing speed drops way down to only 2 or 3 fps.

While the demonstration app is running, the fan on top of the heatsink will periodically spin up, but between it and that ridiculously large heatsink everything is kept in check; I didn't measure any temperature creep above the 50°C (122°F) we saw when the board was sitting idle. In fact, the fan might even be over-specified: when it spins up, the temperature of the heatsink drops back down to around 35°C (95°F).

Getting started with the Python API for the Edge TPU

The demo we just ran was built using the Edge TPU Python module, which provides simple APIs that perform image classification, object detection, and weight imprinting (otherwise known as transfer learning) on the Edge TPU.

Let’s take a look at the object detection demonstration code. You can find the demo code in the /usr/lib/python3/dist-packages/edgetpu/demo directory.

This script is designed to perform object recognition on an image. I've actually gone ahead and slightly modified the original version of the demonstration code distributed with the Coral Dev Board. I've added some code to make the boxes drawn around detected objects a bit thicker, so they're more easily seen, and added labels to each detection box. I've also dropped any detected objects with a detection score below 0.75.
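
That score-filtering step is simple enough to sketch in isolation. Here's a minimal, self-contained version of the logic; the DetectionCandidate tuple and its field names are stand-ins of mine, chosen to mirror the objects the Edge TPU Python API returns, not the API's actual types.

```python
from collections import namedtuple

# Stand-in for the objects returned by DetectWithImage(); the real
# edgetpu API objects carry equivalent label, score, and box fields.
DetectionCandidate = namedtuple('DetectionCandidate',
                                ['label_id', 'score', 'bounding_box'])

def filter_detections(candidates, min_score=0.75):
    """Drop any candidate whose certainty score falls below min_score."""
    return [c for c in candidates if c.score >= min_score]

# Illustrative scores: two credible detections and one spurious one.
candidates = [
    DetectionCandidate(51, 0.964844, (10, 20, 200, 180)),   # banana
    DetectionCandidate(52, 0.789062, (150, 30, 280, 160)),  # apple
    DetectionCandidate(0,  0.210938, (0, 0, 320, 240)),     # spurious 'person'
]

print([c.label_id for c in filter_detections(candidates)])  # → [51, 52]
```

The same effect can be had by raising the threshold argument in the detection call itself, as we'll see below.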

You can either grab my version of the code from GitHub, or use the version included with the board at /usr/lib/python3/dist-packages/edgetpu/demo.

Here I’m running my version of the script, which resides in the mendel user’s home directory, on an image of some fruit, also in the home directory.

$ cd /usr/lib/python3/dist-packages/edgetpu
$ python3 ./object_detection.py --model /usr/lib/python3/dist-packages/edgetpu/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --label /usr/lib/python3/dist-packages/edgetpu/test_data/coco_labels.txt --input fruit.jpg --output out.jpg
banana score = 0.964844
apple score = 0.789062
Please check out.jpg

You can copy the output image off the board to your laptop using the scp command. On your laptop type,

$ scp mendel@xenial-calf:out.jpg .
mendel@xenial-calf's password:
out.jpg 100% 1249KB 9.9MB/s 00:00

replacing xenial-calf with your own board’s hostname, to transfer the file across to your laptop.

I was obviously eating fairly healthily today as my lunch contained both an apple and a banana, both of which were detected in this image. All things considered, I guess it’s also a good thing I’m based in Europe, and don’t use leaded solder any more?

However, if we now go ahead and turn our detection threshold all the way down, we do get a lot more objects detected. Most of these have very low certainty scores, though, and aren't really credible detections.

banana score =  0.964844
apple score = 0.789062

banana score = 0.339844
person score = 0.210938
dining table score = 0.160156
person score = 0.121094
person score = 0.0898438
person score = 0.0898438
skateboard score = 0.0898438
banana score = 0.0898438

You can see here that we get multiple detections of the banana with different bounding boxes, but also some other detections. Interestingly, looking at the size and shape of the bounding boxes, at least one of the additional banana detections, at a certainty of around 0.33, is the banana-shaped gap between the banana and the apple. This gives an insight into what the model looks for when it decides what is, and isn't, a banana; the checkerboard pattern of the board in the background doesn't affect its judgement on the banana-ness of the shape.
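
A standard way to quantify how much two of these overlapping detections agree is the intersection-over-union (IoU) of their bounding boxes; detection pipelines commonly use it to merge duplicates via non-maximum suppression. The Edge TPU API doesn't do this merging for you, but the measure itself is only a few lines. This sketch assumes boxes given as (x0, y0, x1, y1) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x0, y0, x1, y1) bounding boxes."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / float(area_a + area_b - intersection)

# Identical boxes score 1, partial overlaps fall between 0 and 1,
# and disjoint boxes score exactly 0.
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # → 0.1429
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))            # → 0.0
```

Two boxes with an IoU above some cut-off (0.5 is a common choice) can be treated as the same detection, keeping only the higher-scoring one.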

Beyond that, the dining table sort of makes sense, as it's seeing the green craft mat as a table, and I guess the Dev Board looks like a person to our model?

You should also bear in mind that the demonstration models included with the Coral hardware aren't tuned. They are, in other words, not production-quality models. Detection accuracy is dependent on model training, and Google is expecting that users will train their own models for their own needs.

Now let's take a look at the code. Stripping away the extraneous bits that handle command line parameters, load the image, and annotate the result, the code that actually does the inferencing is just two lines long.

First of all we need to instantiate a detection engine with our trained model, where here args.model is the path to our chosen model passed on the command line.

engine = DetectionEngine(args.model)

Then we run the inference by pointing it at the input image, where here img is a PIL.Image object.

ans = engine.DetectWithImage(img, threshold=0.05, keep_aspect_ratio=True, relative_coord=False, top_k=10)

You can see here that we can adjust our credibility threshold in the call itself, along with the maximum number of candidate objects the engine should report above that threshold. So I could have filtered things in the original call, rather than throwing an if statement into the code, if I'd wanted to.

That’s it. That’s how easy it is to do object detection.

The DetectWithImage() call returns a list of DetectionCandidate objects, each of which is a data structure describing a candidate detection. Every object detected will have a corresponding label number returned by the model, which is why we need a label file, so that we can translate the label number into something a bit more human friendly.

We were using a MobileNet SSD v2 model trained with the Common Objects in Context (COCO) dataset which detects the location of 90 types of object. So the label file for our model has a corresponding 90 objects in it, including our banana and apple.

0 person
1 bicycle
2 car
3 motorcycle
4 airplane
5 bus
6 train
7 truck
8 boat
...
51 banana
52 apple
...
87 teddy bear
88 hair drier
89 toothbrush
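
Since the label file is just plain text, with an ID and a name on each line, loading it into a dictionary needs only a few lines of Python. This is a minimal sketch of such a loader, written against the two-column format shown above, not the loader the demo script itself uses:

```python
import os
import tempfile

def load_labels(path):
    """Parse a label file of 'id name' pairs, one per line."""
    labels = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) == 2:
                labels[int(parts[0])] = parts[1]
    return labels

# Quick check against a small fragment of the COCO label file.
fd, path = tempfile.mkstemp(suffix='.txt')
with os.fdopen(fd, 'w') as f:
    f.write('0  person\n51  banana\n52  apple\n')

labels = load_labels(path)
print(labels[51], labels[52])  # → banana apple
```

With the dictionary in hand, translating a candidate's label number is just labels[candidate.label_id].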

Alongside the label number, the candidate detection will have a certainty score, and a bounding box around the detected object which is passed as a numpy.array.

Performing image classification, as opposed to detection, is just as easy. In the same directory as the detection code, /usr/lib/python3/dist-packages/edgetpu/, you can find an example script, along with some test data, that will let you classify images of birds using a MobileNet V2 model trained to recognise 900 different types of bird from the iNaturalist bird dataset.

You can run it from the command line as follows,

$ cd /usr/lib/python3/dist-packages/edgetpu/
$ python3 demo/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label test_data/inat_bird_labels.txt --image test_data/parrot.jpg
Ara macao (Scarlet Macaw)
Score : 0.613281
Platycercus elegans (Crimson Rosella)
Score : 0.152344

and the corresponding code is even shorter than the detection example.

But again if we strip away all the code used for parsing the command line, loading our image, and other setup tasks, we get down to only a couple of lines of code. Here,

engine = ClassificationEngine(args.model)

for result in engine.ClassifyWithImage(img, top_k=3):
    print(labels[result[0]])
    print('Score : ', result[1])

where we instantiate a classification engine with our trained model; again, args.model is the path to our chosen model, passed on the command line. Then we iterate through the results returned, with each result being a list containing the label identity and the confidence score of the classification.

Adding a Camera

Accompanying the Coral Dev Board is an optional camera.

Built around a 5 megapixel (2592×1944 pixel) Omnivision OV5645 sensor, the camera is designed to connect to the board's MIPI-CSI connector, which is located on the underside of the Dev Board. More details of the camera hardware can be found in the datasheet.

To attach the camera to the board, turn the camera module over so it's face down, and flip the small black latch on the white connector so it's facing upward. Then slide the ribbon cable into the slot in the connector with the blue strip facing towards you. If the latch is pulled all the way upwards the ribbon cable should slide smoothly beneath it; you shouldn't have to force it. Then push the black latch back down, in line with the connector, to secure the ribbon cable.

If the Dev Board is powered on and running, you'll need to power it down before attaching the camera module. In your SSH session, go ahead and power down the Dev Board using the shutdown command to bring the board to a clean halt.

$ sudo shutdown -h now

Unplug the USB-C cable powering the board, flip it over, and follow the same procedure as with the camera module, inserting the ribbon cable with the contact pins facing toward the board and the blue strip facing you.

Afterwards, power the board back up and log back in via SSH.

There’s a convenient snapshot tool installed on the Dev Board that lets you test out the camera. Just go ahead and type,

$ snapshot --oneshot
Saving image: img0001.jpg

and then scp the image back to your laptop to check everything is working.

As well as the snapshot tool there’s a pre-canned demo you can try, but to try it out you’ll need to download some additional models.

Google have provided a number of pre-compiled models with corresponding label files that aren’t shipped with the board. You can use these as starting points, but if you’re considering commercial use you’ll need to retrain them.

For this demo go ahead and download the MobileNet V2 object classification model and associated label file.

$ cd ~
$ wget https://storage.googleapis.com/cloud-iot-edge-pretrained-models/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite
$ wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/canned_models/imagenet_labels.txt

Then run the demo application,

$ edgetpu_classify_server --model ~/mobilenet_v2_1.0_224_quant_edgetpu.tflite --labels ~/imagenet_labels.txt

Similar to when we ran our initial demo on the board, this will start a web server with a video stream; however, this time the video stream will come from the camera module rather than being pre-canned.

Go ahead and open up a browser tab on your laptop, and navigate to http://hostname:4664, where the hostname is whatever name your board chose on first boot, as mine picked xenial-calf for its hostname, I went to http://xenial-calf:4664.

Now start waving a banana in front of the camera and see what happens.

Transfer Learning

The success of machine learning has relied heavily on the corpus of training data that companies, like Google, have managed to build up. For the most part these training datasets are the secret sauce, closely held by the companies, and people, that have them. Although there are a number of open sourced collections of visual data to train object recognition algorithms, there is far less speech data available. One of the few available sources is the Open Speech Recording project from Google and, while they've made an initial dataset release, it's still fairly limited.

In practice it’s never going to be feasible for most people to build the required large datasets, which is why people are looking seriously at transfer learning.

Google have provided some solid documentation on how to retrain an image classification model or an object detection model in a Docker container on your desktop machine.

However it’s the ability to retrain an image classification model on the device at near-realtime speed that’s probably going to interest most people. Making transfer learning available on device is a big step towards making standalone edge computing viable.

Building your own models

While Google’s precompiled models can actually take you a long way, eventually you’re going to want to train your own models.

You'll then need to convert your TensorFlow model to the optimised FlatBuffer format that TensorFlow Lite uses to represent graphs. From there, you'll need to compile your TensorFlow Lite model for compatibility with the Edge TPU using Google's web compiler.

During the current beta period the Edge TPU compiler has some restrictions, but these should be lifted when Coral comes out of Beta testing next month.

Using a web compiler is a neat move by Google to get around a problem you face when working with Intel Movidius based hardware on an ARM-based board like the Raspberry Pi, where you need an additional x86-based development machine to compile your models before you can deploy them onto the accelerator hardware.

Right now, during the beta phase, the Edge TPU web compiler is restricted to a few model architectures: either a MobileNet V1/V2 model with a 224×224 max input size and a 1.0 max depth multiplier, an Inception V1/V2 model with a 224×224 fixed input size, or an Inception V3/V4 model with a 299×299 fixed input size. All of these models must be quantised TensorFlow Lite models (.tflite files) less than 100MB.

These architecture restrictions are going to be removed in a future update, with any quantised TensorFlow Lite model being allowed, so long as the model uses 1-, 2-, or 3-dimensional tensors with the tensor sizes and model parameters fixed at compile time. Although Google does warn that there may be “…other operation-specific limitations” that apply, those aren’t yet clear.

The restriction to INT8 models and the small cache sizes of the Coral hardware are pretty understandable: the board is designed for comparatively low-power deployments. With required power consumption far lower than some other hardware, for instance NVIDIA's recently released Jetson Nano board, a direct comparison isn't necessarily very fair.


Google have provided some excellent overview documentation online to get you started working with the Dev Board and camera; alongside this, more detailed Python API documentation is available to download.

The Coral Dev Board feels very different from Google's previous machine learning kits that launched under the AIY Projects brand. While the Edge TPU hardware is certainly affordable enough to get traction in the maker market, it's pretty evident that the new Edge TPU-based hardware is aimed at a more professional audience than the previous Raspberry Pi-based kits, with the Dev Board almost certainly intended as an evaluation board for the System-on-Module (SoM), which will be made available "in volume" later in the year, rather than as a standalone board intended for development.

Alongside the arrival of TensorFlow 2.0, as well as TensorFlow Lite for micro-controllers, the ecosystem around edge computing is starting to feel far more mature. But it's the arrival of Edge TPU hardware that makes the idea of machine learning on the edge, and real-time data interpretation, a lot more realistic.

This post is sponsored by Coral from Google.
