Computing at the Edge of IoT

Over the past year, we’ve had some great conversations with developers about building IoT devices with the Android Things platform. A common question that comes up is whether the platform is suitable for the Internet of Things (IoT) given that the hardware is much more powerful than the microcontrollers typically found in the space today. To answer that question, let’s examine how hardware choice and use case requirements factor into different IoT system architectures.

“Computer programmer's single microchip” by Brian Kostiuk on Unsplash

You already lost me, what’s a microcontroller?

Microcontrollers (MCUs) are simple, programmable, and fully integrated systems typically used for embedded control. In addition to a processor (CPU), they generally include all of the memory and peripheral interfaces necessary on a single chip. This simplicity and integration means that MCUs are relatively inexpensive and generally consume very little power. Many popular hardware development platforms, such as Arduino, are built on top of MCUs.

MCUs generally do not have the resources (such as a Memory Management Unit) to run a higher-level operating system like Linux or Android. Nor can they interface with high-speed peripherals like high-resolution cameras and displays. However, because the application code runs much closer “to the metal”, MCUs are very effective in real-time applications where timing is critical. Production MCUs often run some flavor of a real-time operating system (RTOS) to guarantee tasks run in the exact amount of time required for precise measurement and control.

All of these characteristics start to define applications where MCUs are a perfect fit…and where they aren’t.

The race to the cloud

Systems focused primarily (or entirely) on MCU hardware are based on what I’ll call the cloud-first architecture. In this architecture, every edge device is connected directly to the internet (usually through WiFi), then provisioned and managed through a cloud service such as Google’s Cloud IoT Core. Inexpensive MCU platforms with built-in WiFi stacks, like the popular ESP8266, make designing systems like this very attractive.

In these systems, complex data analysis and decision making tasks are handled in the cloud back-end, while the device nodes perform data collection tasks or respond to remote control commands.

Cloud-first architecture

Overall, this is a nice balance. Hardware is inexpensive to replace and can run on small batteries for multiple years, and heavy compute resources are provided by cloud services that are easy to scale up to meet demand as the number of edge devices increases.

MCU hardware and a cloud-first architecture perform well in applications where bandwidth and network latency are less of a concern. Data payloads are small and can be uploaded to the cloud in batches. Here are some examples:

  • Distributed sensor-based data collection
  • Mobile asset monitoring and tracking
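As a rough sketch of the batching pattern these applications rely on, here is a hypothetical sensor node that buffers small readings locally and flushes them to the cloud as a single payload. The `TelemetryBatcher` name, batch size, and injected `publish` callable are illustrative assumptions, not part of any particular SDK:

```python
import json
import time

class TelemetryBatcher:
    """Buffers small sensor readings and uploads them as one batched payload."""

    def __init__(self, batch_size=10, publish=print):
        self.batch_size = batch_size
        self.publish = publish  # e.g. an MQTT client's publish function
        self.buffer = []

    def add(self, sensor_id, value):
        """Record one reading; flush automatically when the batch is full."""
        self.buffer.append({"sensor": sensor_id, "value": value, "ts": time.time()})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Serialize the whole buffer as one upload instead of many."""
        if self.buffer:
            self.publish(json.dumps(self.buffer))
            self.buffer = []
```

Batching like this is what keeps per-reading radio time (and therefore power draw) low enough for multi-year battery life on MCU-class nodes.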

Living on the edge

IoT systems are trending towards edge computing, enabled by the smartphone economy, which has driven down the cost (and power consumption) of more capable hardware. There are four main reasons why applications perform computing tasks at the edge:

  1. Privacy: Avoid sending all raw data to be stored and processed on cloud servers.
  2. Bandwidth: Reduce costs associated with transmitting all raw data to cloud services.
  3. Latency: Reaction time is critical and cannot be dependent on a cloud connection.
  4. Reliability: The ability to operate even when the cloud connection is interrupted.

A great example of this in practice is the Google Home device. While the Google Assistant functionality is cloud-driven, the hotword detection happens locally on the device. This protects user privacy by avoiding uploads of audio data until the device knows you are talking to it, but it also eliminates the bandwidth that would have been consumed uploading raw audio to the cloud. Without edge computing, a device like this would not be feasible for consumer use.
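To make the pattern concrete (and this is only an illustration of the idea, not how Google Home actually detects its hotword), here is a minimal local gate: frames of audio are checked on-device with a cheap energy threshold, and only frames that pass the check are ever uploaded. Raw data below the threshold never leaves the device:

```python
def frame_energy(frame):
    """Mean absolute amplitude of one audio frame (a list of samples)."""
    return sum(abs(s) for s in frame) / len(frame)

def gate_uploads(frames, threshold=0.5):
    """Keep only the frames whose on-device check fires.

    Stand-in for local detection: audio below the threshold is discarded
    locally, protecting privacy and saving upload bandwidth. A real
    hotword detector would run a small ML model here instead.
    """
    return [f for f in frames if frame_energy(f) >= threshold]
```

The same gate-then-upload structure applies to any edge device that filters locally before involving the cloud.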

This is also true in industrial automation systems where latency and reliability are critical. In these systems, one or more intermediate gateway devices act as an interface between local edge devices (which may be MCU-powered) and any cloud services.

Edge computing pushes intelligence out of the central cloud

Devices running Android Things are well-suited to gateway applications because they have the computational horsepower to locally apply data transformations and automation rules, paired with the SDK support to easily integrate with the Google Cloud Platform APIs.
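The gateway role described above can be sketched in a few lines. This is plain Python rather than the Android Things SDK, and the function and rule shapes are my own assumptions; the point is the structure: automation rules fire locally (so latency and reliability don’t depend on the cloud link), while a transform reshapes readings before they are forwarded upstream:

```python
def gateway_process(readings, rules, transform):
    """Apply local automation rules, then transform readings for the cloud.

    readings:  raw dicts from edge nodes, e.g. {"sensor": "t1", "value": 41.0}
    rules:     (predicate, action) pairs evaluated locally, so the reaction
               does not wait on a round trip to a cloud service
    transform: reshapes a reading before it is forwarded upstream
    Returns the list of transformed readings destined for the cloud.
    """
    upstream = []
    for reading in readings:
        for predicate, action in rules:
            if predicate(reading):
                action(reading)  # e.g. trip a local relay immediately
        upstream.append(transform(reading))
    return upstream
```

For example, a rule might trip an alarm for any over-temperature reading while the transform converts units for the cloud back-end; both happen at the gateway regardless of connectivity.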

There is also an upside here for MCU-powered nodes. In areas where the lowest power consumption is still critical, WiFi and Ethernet can often strain (or break) the power budget. If we remove the need to connect each device to a cloud service, we can substitute a more power-efficient local network transport, such as Bluetooth Low Energy or an 802.15.4 radio attached to a Thread mesh network.

Making IoT smarter

Advancements in both artificial intelligence (AI) and machine learning (ML) promise to have big effects on IoT systems in the coming years. The ability of ML algorithms to find patterns and make predictions from data collected by devices will quickly become essential to the success of IoT as the number of devices (and therefore the volume of data) continues to grow.

Systems built around a cloud-first architecture can already take advantage of services in Google’s Cloud AI suite, such as Cloud Vision and Cloud Speech, to enable machine learning. MCUs can interface with these services through REST APIs or indirectly via cloud functions and the bridges provided by Cloud IoT Core. The “heavy lifting”, however, must still be done in the cloud.
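As an example of the REST path, here is a sketch that builds the JSON body for a Cloud Vision `images:annotate` request using only the standard library. The request shape follows the public Cloud Vision v1 REST API; any device that can speak HTTPS would POST this body to the `images:annotate` endpoint, which is where the “heavy lifting” happens:

```python
import base64
import json

def vision_label_request(image_bytes, max_results=5):
    """Build the JSON body for a Cloud Vision images:annotate REST call.

    The image is base64-encoded inline, and LABEL_DETECTION asks the
    cloud to do the actual ML work of labeling what the image contains.
    """
    return json.dumps({
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    })
```

Note that every request ships the full raw image to the cloud, which is exactly the bandwidth cost that pushing inference to the edge avoids.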

To truly scale these types of capabilities in IoT systems, we need to decentralize and push as much AI/ML as possible to the edge. This is where a platform like Android Things starts to really shine. In addition to having the necessary compute power, client libraries like the Google Assistant SDK and TensorFlow Lite enable edge devices to perform these complex tasks with very little developer integration work.

What’s next?

As I mentioned earlier, one of the advantages of MCUs is the ability to essentially control what happens on every clock cycle, providing real-time control over the I/O on the chip. This enables application firmware to implement custom protocols not directly supported by the hardware controllers on the processor.

We are starting to see chip vendors build these capabilities into their hardware, including both application processor (Cortex-A) and MCU (Cortex-M) cores within the same package. Architectures like this will enable developers to have a dedicated real-time core within the larger system, essentially providing the best of both worlds.

The right tools for the job

We’ve seen that demand for low latency, offline access, and enhanced machine learning capabilities is fueling a move towards decentralization with more powerful computing devices at the edge. Nevertheless, many distributed applications benefit more from a centralized architecture and the lowest cost hardware powered by MCUs. It all comes down to evaluating what your system’s needs truly are, and selecting the right tools for the job.
