Any Embedded Developer Can Create AI/ML Systems

Clive "Max" Maxfield
Supplyframe
Published in
5 min readFeb 25, 2020
Design flow with NanoEdge AI Studio (Image source: Cartesiam.ai)

The Internet of Things (IoT) is becoming ever more pervasive. These days, we are surrounded by IoT devices. The problem is that most of these devices are pretty “dumb” in the scheme of things. What we need to do is give our devices some “smarts” in the form of artificial intelligence (AI) and machine learning (ML) capabilities (see also What the FAQ are AI, ANNs, ML, DL, and DNNs?).

The Problem(s)

Let me give you an example. Suppose you are in charge of a factory that contains lots and lots of old machines: motors, generators, pumps… you name it, and it’s out there. There are trillions of dollars’ worth of such legacy equipment working away in factories in the USA alone. Such equipment may have rudimentary monitoring and control capabilities, like a temperature sensor and a controller that will turn the machine off if the temperature exceeds a specified value, but, in the vast majority of cases, that’s about as sophisticated as it gets.
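To put the “rudimentary” part in perspective, that sort of protection often boils down to a hard threshold and nothing more. Here’s a minimal sketch in C (the trip point, sensor read, and shutdown call are hypothetical stand-ins, not any particular controller’s API):

```c
/* A minimal sketch of the "rudimentary" protection described above:
 * a hard over-temperature cutoff with no prediction whatsoever.
 * read_temperature_c() and shut_down_machine() are hypothetical
 * stand-ins, not any particular controller's API. */

#define MAX_TEMP_C 90.0f  /* assumed trip point, for illustration only */

static float read_temperature_c(void) { return 25.0f; }  /* stub sensor read */
static void  shut_down_machine(void)  { }                /* stub shutdown    */

void control_loop_step(void)
{
    /* Purely reactive: by the time this fires, the damage may be done. */
    if (read_temperature_c() > MAX_TEMP_C) {
        shut_down_machine();
    }
}
```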

Now, let’s suppose that the machine eventually overheats and automatically shuts down in the middle of the night. Not surprisingly, this has the potential to negatively impact other activities in the factory, ultimately delaying shipments and costing money. If only there had been some way to know, hours, days, or even weeks in advance, that things were starting to go downhill, it would have been possible to schedule any necessary maintenance in a way that prevented disruption to the factory’s operations.

The answer is to create an embedded system with AI/ML capabilities that can be attached to the machine and use sensors to monitor things like temperature and vibration. Furthermore, such a system should be able to detect anomalies in the early stages of things starting to go wrong, to make predictions about future events, and to alert its human supervisors as to what’s going on.

Like many things, this all sounds wonderful if you speak quickly, wave your hands around a lot, and try to avoid talking about the obvious problems. One issue is that AI/ML applications are typically trained in the cloud using a qualified data set, which is itself time-consuming and expensive to create. A bigger problem is that most embedded systems designers don’t have a clue about creating AI/ML applications.

If you perform a Google search to see how many IoT devices there are in the world right now, you will see estimates ranging in the tens of billions, and those estimates only climb in the coming years, which means we are talking about one heck of a lot of devices.

At the time of this writing, only about 0.3% of these devices are augmented with AI/ML capabilities. This number is expected to rise to around 25% by 2025. This all sounds rather exciting, but there is a small fly in the soup or a large elephant in the room, depending on your metaphor of choice. The problem is that we don’t have enough AI/ML developers to create all these wonderful applications.

According to the IBM Quant Crunch Report, the shortfall of data scientists is currently running at 130,000 in the USA alone. Meanwhile, according to the IDC, there are around 22 million software developers in the world. Of these, approximately 1.2 million focus on embedded systems and, of these, only around 0.2% have even minimal AI/ML skills.

Just to throw out one more interesting statistic, Statista predicts that 28 billion microcontrollers will be deployed in 2020. Spread over the roughly 31.5 million seconds in a year, that equates to more than 885 each second.

Let’s take a moment to think about this. We have billions of microcontrollers being deployed in billions of IoT devices. We have millions of software developers, of whom around 1.2 million focus their attention on developing embedded systems, the majority of which are based on the aforementioned microcontrollers. We have a desperate desire to imbue our “dumb” IoT devices with AI/ML-based “smarts.” But only around 0.2% of our embedded systems developers have even minimal AI/ML skills.

If only there were a way for existing developers of microcontroller-based embedded systems to add AI/ML capabilities to their systems without having to learn anything about AI/ML and without having to create the data sets required to train their AI/ML applications.

The Solution(s)

Let’s look at one possible solution: NanoEdge AI Studio from Cartesiam. The way this works is that you launch the NanoEdge AI Studio IDE on a PC running Windows 10 or Ubuntu Linux. Next, you select the processor core featured in your target microcontroller device. Currently, your options are the Arm Cortex-M0, M0+, M3, M4, and M7, which are among the most ubiquitous processor cores in the embedded market. This is also the point at which you specify how much RAM you wish to devote to your AI/ML solution (NanoEdge AI Studio generates solutions that require only 4 KB to 16 KB of RAM, which Cartesiam claims makes them the most highly optimized AI/ML algorithms in the industry).

Next, you select the number and types of sensors you wish to use, such as temperature sensors, magnetic sensors, accelerometers, microphones (for sound recognition, not speech recognition), and so forth. Note that you don’t have to identify specific model numbers, just general types, like “one 3-axis accelerometer and three temperature sensors” (you also have to load samples of generic data associated with each sensor to give the system a contextual idea of what it will be dealing with).
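To give you a feel for the sort of raw data involved, here’s a minimal sketch of one sampling window from a 3-axis accelerometer. The window size, interleaved layout, and driver calls are assumptions I’ve made for illustration, not the Studio’s actual input format:

```c
/* Illustrative only: one sampling window from a 3-axis accelerometer.
 * The window size and interleaved X/Y/Z layout are assumptions made
 * for this example, not NanoEdge AI Studio's actual input format. */
#include <stdint.h>

#define AXES    3u    /* X, Y, Z */
#define SAMPLES 256u  /* samples per window (assumed) */

/* Hypothetical driver stubs standing in for a real accelerometer. */
static float read_accel_x(void) { return 0.0f; }
static float read_accel_y(void) { return 0.0f; }
static float read_accel_z(void) { return 0.0f; }

/* Fill one window with interleaved X/Y/Z readings. */
void capture_window(float window[AXES * SAMPLES])
{
    for (uint32_t i = 0; i < SAMPLES; i++) {
        window[(AXES * i) + 0] = read_accel_x();
        window[(AXES * i) + 1] = read_accel_y();
        window[(AXES * i) + 2] = read_accel_z();
    }
}
```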

This is one of the clever parts: The NanoEdge AI Studio IDE comes equipped with an extensive suite of AI/ML building blocks. Using the information you’ve provided (type of processor, amount of RAM, number and types of sensors, etc.), it searches a space of some 500 million possible combinations and generates the best AI/ML solution it can find. The resulting solution, which is provided as a C library that is easily embeddable into the main microcontroller program, is subsequently compiled and downloaded into your embedded system.
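Purely for illustration, here’s a hypothetical sketch of the sort of interface such a generated library might expose to the main program. The commented-out header name and the function signatures are my assumptions, not Cartesiam’s actual API:

```c
/* Hypothetical sketch of the interface such a generated C library
 * might expose to the main program. The header name and function
 * signatures are assumptions for illustration, not Cartesiam's
 * actual API. */
#include <stdint.h>

/* #include "NanoEdgeAI.h" */  /* assumed name of the generated header */

extern void    neai_init(void);                   /* one-time model setup        */
extern void    neai_learn(const float *window);   /* absorb one training window  */
extern uint8_t neai_detect(const float *window);  /* 0-100 similarity to normal  */
```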

This is another of the clever parts: You don’t need a training data set, because your AI/ML solution spends a week or so using its sensors to observe the machine working 24/7 and training itself. Once the solution has been trained, it can be left to its own devices (no pun intended), identifying patterns, detecting anomalies, predicting issues and outcomes, and alerting its human co-workers to any potential problems.
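Under those assumptions, the learn-then-monitor flow might look something like the following sketch; the window count standing in for “a week or so,” the alert threshold, and the function names carried over from the earlier sketches are all illustrative, not the real thing:

```c
/* Sketch of the self-training flow described above: learn in place for
 * an initial period, then switch to anomaly detection. The window
 * count, threshold, and function names (carried over from the earlier
 * sketches) are all assumptions, not Cartesiam's actual API. */
#include <stdint.h>

#define AXES            3u
#define SAMPLES         256u
#define LEARN_WINDOWS   10000u  /* stands in for "a week or so" of observation   */
#define ALERT_THRESHOLD 80u     /* similarity below this raises a flag (assumed) */

extern void    neai_init(void);
extern void    neai_learn(const float *window);
extern uint8_t neai_detect(const float *window);
extern void    capture_window(float *window);    /* from the earlier sketch */
extern void    alert_supervisor(uint8_t score);  /* hypothetical alert hook */

void monitor_task(void)
{
    static float window[AXES * SAMPLES];

    neai_init();

    /* Phase 1: self-training on the live machine -- no data set needed. */
    for (uint32_t i = 0; i < LEARN_WINDOWS; i++) {
        capture_window(window);
        neai_learn(window);
    }

    /* Phase 2: normal operation -- watch for departures from "normal". */
    for (;;) {
        capture_window(window);
        uint8_t similarity = neai_detect(window);
        if (similarity < ALERT_THRESHOLD) {
            alert_supervisor(similarity);  /* early warning for the humans */
        }
    }
}
```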

I don’t know about you, but I think this is one of the most exciting AI/ML developments I’ve seen recently. A couple of years ago, the air conditioning unit at our house failed unexpectedly in the middle of the summer, and we had a very uncomfortable week waiting for the HVAC guys to replace it with a new unit. I’m currently thinking about using NanoEdge AI Studio to create a simple application that I can attach to the air conditioning unit outside, possibly using a 3-axis accelerometer to monitor vibration, along with a tricolor LED to inform me when things are good (green), hinting at going “wobbly” (orange), or starting to go “pear-shaped” (red). If I see anything other than a cheery green LED welcoming me when I return home, I’ll immediately get the HVAC guys to race around and sort things out before I end up with tears streaming down my face.
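For what it’s worth, the tricolor LED part could be as simple as bucketing the detector’s similarity score. The thresholds and the LED driver call in this sketch are, once again, pure assumption on my part:

```c
/* Hypothetical sketch of the tricolor LED idea: bucket the similarity
 * score from the detector into green/orange/red. The thresholds and
 * the LED driver call are assumptions for illustration. */
#include <stdint.h>

typedef enum { LED_GREEN, LED_ORANGE, LED_RED } led_color_t;

extern void set_led(led_color_t color);  /* hypothetical LED driver */

void show_health(uint8_t similarity)  /* 0-100, as from neai_detect() */
{
    if (similarity >= 90u) {
        set_led(LED_GREEN);   /* all good                     */
    } else if (similarity >= 70u) {
        set_led(LED_ORANGE);  /* hinting at going "wobbly"    */
    } else {
        set_led(LED_RED);     /* starting to go "pear-shaped" */
    }
}
```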

What do you think about all of this? Is this the development you’ve been waiting for to leap into creating AI/ML-enabled embedded systems with gusto and abandon?


Clive "Max" Maxfield
Supplyframe

Over the years, Max has designed everything from silicon chips to circuit boards and from brainwave amplifiers to Steampunk Prognostication Engines (don’t ask).