Xcore.ai Brings New Developments for AI and ML on the Extreme Edge

Clive "Max" Maxfield
Published in Supplyframe · 3 min read · Mar 11, 2020
Block diagram of the xcore.ai (Image source: XMOS.com)

Do you recall my column TinyML Packs a Punch from a couple of weeks ago? It introduced the concept of deploying artificial intelligence (AI) and machine learning (ML) technologies on small, low-power processing engines like microcontrollers located at the extreme edge of the internet, which is the point where the internet meets and greets the real world.

Well, since that column, it seems that rarely a day goes by without something related to TinyML passing before my eyes, such as the fact that Any Embedded Developer Can Create AI/ML Systems without actually having to know AI/ML. Even better, these systems will train themselves in the field (as opposed to you having to train them in the cloud), after which they will be able to identify patterns, detect anomalies, and predict issues and outcomes.

As part of all this, I’m currently devouring the book TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers by Pete Warden and Daniel Situnayake (I will be writing a review of this tome in a future column).

There’s so much exciting stuff going on that we are spoiled for choice, but one recent development really made me sit up and take notice…

An Interesting Little AI/ML Processor

I was chatting with the folks at XMOS a couple of weeks ago when they were preparing to unveil their new xcore.ai processor at the recent tinyML Summit (12–13 February 2020). In fact, this is the third generation of their technology. The first generation was a tasty little number focused on control and I/O functionality (you'll see what I mean in a moment); the second generation added luscious amounts of digital signal processing (DSP) functionality, which enables applications like listening to a sound space in which multiple people are talking and disassembling it into the individual voices in real time; and the third adds a smorgasbord of AI/ML-related capabilities.

The easiest way to wrap your brain around the xcore.ai is to look at its block diagram as illustrated above. At the top, we see up to 128 I/O pins and hardware ports that can be programmed in software to implement a wide variety of communications protocols and peripheral functions. At the bottom, we see two hard PHY (physical layer) functions for implementing USB and MIPI interfaces, if required.
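To give a flavor of what "ports programmed in software" means in practice, here is a minimal sketch of a bit-banged UART transmitter of the kind you might hang off one of those ports. Note that port_out() and delay_ticks() are hypothetical stand-ins I've stubbed out so the code compiles and runs on an ordinary PC; they are not XMOS's actual I/O library calls.

/*
 * Illustrative sketch only: a bit-banged UART transmit routine of the kind
 * you might run against a software-defined port. port_out() and delay_ticks()
 * are hypothetical stand-ins for whatever the real I/O library provides;
 * here they are stubbed so the file compiles and runs on an ordinary PC.
 */
#include <stdint.h>
#include <stdio.h>

static void port_out(uint32_t level)      /* stand-in for a real port write */
{
    printf("%u", level & 1u);
}

static void delay_ticks(uint32_t ticks)   /* stand-in for a bit-period delay */
{
    (void)ticks;
}

/* Transmit one byte, 8N1 framing, least-significant bit first. */
static void uart_tx_byte(uint8_t byte, uint32_t bit_time)
{
    port_out(0);                          /* start bit */
    delay_ticks(bit_time);

    for (int i = 0; i < 8; i++) {         /* eight data bits */
        port_out((byte >> i) & 1u);
        delay_ticks(bit_time);
    }

    port_out(1);                          /* stop bit */
    delay_ticks(bit_time);
}

int main(void)
{
    uart_tx_byte('A', 100);               /* prints the bit pattern for 'A' */
    printf("\n");
    return 0;
}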

On either side are two tiles, which are connected by a high-speed switch. Each tile boasts a RISC processor with a tightly coupled SRAM and an integer and floating-point vector processing unit. Execution is split over eight logical cores — i.e., concurrent hardware threads — each of which can run software tasks that execute I/O, control, DSP, and AI processing, where each task can communicate with the other tasks.
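For a feel of what that integer vector unit is there to chew on, consider the inner loop of a quantized neural-network layer: an 8-bit dot product with 32-bit accumulation. The scalar C below is purely illustrative; in practice, an optimized library would replace this loop with vector instructions.

/*
 * Scalar C reference for the kind of inner loop an integer vector unit is
 * built to accelerate: an 8-bit dot product with 32-bit accumulation, the
 * core operation of a quantized neural-network layer. Illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

static int32_t dot_int8(const int8_t *a, const int8_t *b, int n)
{
    int32_t acc = 0;                       /* widen to avoid overflow */
    for (int i = 0; i < n; i++) {
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}

int main(void)
{
    const int8_t weights[4] = { 1, -2, 3, -4 };
    const int8_t inputs[4]  = { 5,  6, 7,  8 };

    /* 1*5 + (-2)*6 + 3*7 + (-4)*8 = -18 */
    printf("dot product = %d\n", dot_int8(weights, inputs, 4));
    return 0;
}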

When you are creating an application (the device is fully programmable in C, with specific features for DSP, AI, and ML accessible via optimized C libraries), one or more logical cores can be used to implement interface functions (e.g., I2C, SPI, USB, MIPI), another group of logical cores can be used to perform far-field microphone and sensor data processing, and yet another group of cores can be used to implement a neural network model to perform AI/ML tasks.
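As a rough desktop analogue of that partitioning, the sketch below uses POSIX threads and pipes to stand in for logical cores and the channels between them: one task plays the interface role, one the DSP role, and one the inference role. It illustrates the programming pattern, not XMOS's actual API, and the "processing" in each stage is deliberately trivial.

/*
 * Desktop analogue of the core-partition idea described above: one thread
 * stands in for the interface cores (producing "samples"), one for the DSP
 * cores (simple arithmetic "filtering"), and one for the inference cores
 * (a toy threshold "classifier"). Pipes stand in for inter-task channels.
 * This is a sketch of the pattern, not XMOS's actual API. Build with -pthread.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int io_to_dsp[2];   /* pipe: interface task -> DSP task  */
static int dsp_to_nn[2];   /* pipe: DSP task -> inference task  */

static void *interface_task(void *arg)    /* stands in for I/O cores */
{
    (void)arg;
    for (int sample = 0; sample < 8; sample++) {
        write(io_to_dsp[1], &sample, sizeof sample);
    }
    close(io_to_dsp[1]);
    return NULL;
}

static void *dsp_task(void *arg)          /* stands in for DSP cores */
{
    (void)arg;
    int sample;
    while (read(io_to_dsp[0], &sample, sizeof sample) == (ssize_t)sizeof sample) {
        int filtered = sample * 2 - 3;    /* toy "signal processing" */
        write(dsp_to_nn[1], &filtered, sizeof filtered);
    }
    close(dsp_to_nn[1]);
    return NULL;
}

static void *inference_task(void *arg)    /* stands in for AI/ML cores */
{
    (void)arg;
    int feature;
    while (read(dsp_to_nn[0], &feature, sizeof feature) == (ssize_t)sizeof feature) {
        printf("feature %2d -> %s\n", feature,
               feature > 5 ? "event" : "no event");  /* toy classifier */
    }
    return NULL;
}

int main(void)
{
    pthread_t io, dsp, nn;

    if (pipe(io_to_dsp) != 0 || pipe(dsp_to_nn) != 0) {
        return 1;
    }

    pthread_create(&io,  NULL, interface_task, NULL);
    pthread_create(&dsp, NULL, dsp_task,       NULL);
    pthread_create(&nn,  NULL, inference_task, NULL);

    pthread_join(io,  NULL);
    pthread_join(dsp, NULL);
    pthread_join(nn,  NULL);
    return 0;
}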

With a price tag of only around $1, I can easily see xcore.ai chips being deployed in almost anything, from light switches to electric toasters to smoke detectors to smart streetlights to… as usual, we are limited only by our imaginations.


Clive "Max" Maxfield
Supplyframe

Over the years, Max has designed everything from silicon chips to circuit boards and from brainwave amplifiers to Steampunk Prognostication Engines (don’t ask).