TF Dev Summit 2019: Coral Edge TPUs

Google’s new machine-learning-at-the-edge hardware

Ray Yamamoto Hilton
Eliiza-AI
2 min read · Mar 18, 2019


Google’s bespoke TPUs have been evolving quickly and provide a cost-effective performance boost to machine learning when running in the cloud.

Cloud TPU evolution, from left to right: v1 (2016), integer only; v2 (2017), 180 TFLOPS per board; v3 (2018), >100 PFLOPS per pod

Google are now providing a development board: a complete, Raspberry-Pi-like system with an on-board “edge” TPU. This board, called “Coral”, enables energy-efficient acceleration of inference, and even some kinds of training (e.g. transfer learning), on the device in real time.
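
Getting a model onto the Edge TPU still involves a host-side workflow: train or retrain a model, apply full-integer quantization, convert to TensorFlow Lite, and pass the result through Google’s edgetpu_compiler. Below is a rough sketch of that transfer-learning and conversion path using Keras; the class count, dataset and file names are placeholders, and it illustrates the general workflow rather than the on-device training API itself.

```python
# Rough sketch: retrain a classifier head and produce a quantized TFLite file
# that can then be compiled for the Edge TPU. All names below are placeholders.
import tensorflow as tf

# Frozen pretrained feature extractor plus a small trainable head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 placeholder classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # train_ds: your own labelled image dataset

# Full-integer quantization is required for the Edge TPU. The representative
# dataset here is random data purely to keep the sketch self-contained.
def representative_data():
    for _ in range(10):
        yield [tf.random.uniform((1, 224, 224, 3))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open("model.tflite", "wb").write(converter.convert())
# Then, on the command line:  edgetpu_compiler model.tflite
```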

They had demos running image classification on live video feeds at over a hundred frames per second.

Coral Edge TPU — complete system for doing inference and some kinds of training on-device.
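
For a sense of what those demos involve in code, here is a minimal sketch of classifying camera frames with an Edge-TPU-compiled model. It assumes the tflite_runtime package with the Edge TPU delegate library installed and an OpenCV camera loop; the model and label file names are placeholders.

```python
# Minimal sketch: classify live camera frames with an Edge-TPU-compiled model.
# File names are placeholders; requires tflite_runtime, libedgetpu and OpenCV.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

labels = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the model's input size and colour order, then add a batch axis.
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(inp["index"], np.expand_dims(rgb, 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print(labels[int(np.argmax(scores))], scores.max())
```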

They also had a USB accelerator device that you can attach to your laptop or to an existing system (such as a Raspberry Pi) to accelerate inference. I can see this being a great way to experiment with optimising for the Edge TPU locally and for proof-of-concept projects.
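
A quick way to see what the accelerator buys you is to benchmark the same quantized model with and without it. The sketch below times repeated invocations of a plain CPU TFLite interpreter against the Edge-TPU-compiled copy loaded through the USB accelerator’s delegate; the model file names are placeholders and the model is assumed to take uint8 input.

```python
# Sketch: compare inference latency on CPU vs. the Edge TPU USB accelerator.
# Both model files are placeholders; the *_edgetpu.tflite copy is assumed to
# have been produced by edgetpu_compiler from the same quantized model.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def benchmark(interpreter, runs=100):
    """Return mean per-inference latency in milliseconds for random input."""
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.random.randint(0, 256, size=tuple(inp["shape"]), dtype=np.uint8)
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up: the first call includes setup cost
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000

cpu = Interpreter(model_path="mobilenet_v2_quant.tflite")
tpu = Interpreter(
    model_path="mobilenet_v2_quant_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
print(f"CPU:      {benchmark(cpu):.1f} ms/inference")
print(f"Edge TPU: {benchmark(tpu):.1f} ms/inference")
```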

We have one in the office, so we will be exploring what we can do with it over the coming weeks. I’d be interested to hear what use-cases people see for this product, so please do let me know if there are things you’d like us to test out, or if you have thoughts on how edge machine learning could shift the way we approach projects.
