Powering the Edge: Unleashing the potential of ML

Chris Barsolai
Intel Student Ambassadors
3 min read · Aug 6, 2019

Today, billions of edge devices are being deployed across consumer, industrial, IoT, automotive, medical, drone, and surveillance applications, thanks to ever-growing speeds, shrinking geometries, ultra-low-power semiconductor technologies, and System-on-Chip (SoC) devices. Sensors on edge devices generate enormous amounts of data, including images, video, speech, and other non-imaging data, which traditionally needs to be transmitted back to the cloud. Even with abundant and reliable transmission bandwidth, the round-trip delay of sending data to the cloud and receiving commands back at the edge device is prohibitive for most real-time, latency-sensitive applications. Further, security and privacy are major concerns when transferring user data from edge devices to the cloud. Hence, there is huge demand for enabling intelligent decisions on next-generation edge devices, in either a fully or semi-autonomous way.

TL;DR: AI on MCUs enables cheaper, lower-power, and smaller edge devices. It reduces latency, conserves bandwidth, improves privacy, and enables smarter applications.

The Enabling Technologies Behind Edge Computing and Machine Learning

Arguably, this kind of computing division of labor could have been attempted in the past, but the critical elements enabling these applications now are new types of ML algorithms and specialized computing components, such as vision processing units (VPUs). These specialized AI chips are starting to be introduced to the market by big tech industry vendors, as well as by many smaller startups eager to make their mark on this new opportunity.

AI Edge Devices Comparison

Today, cloud architectures let you build and train ML models in the cloud, then run those models on edge devices using edge hardware accelerators.

The ML Workflow

Credits: Crosser

The machine learning workflow consists of two main steps: developing the model and executing it. The first step is an offline operation in which stored data is used to train and tune a model. Once satisfactory results are achieved, the trained model is deployed in an execution environment to make predictions on real-time data. The edge is typically used only for executing the ML model. However, model development is an iterative process: the model may be optimized or improved over time as more data becomes available or the architecture is refined. You should therefore expect the ML model on an edge device to be updated several times during its life cycle.
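The two steps above can be sketched in plain Python. This is a minimal, self-contained illustration, not OpenVINO or any real edge runtime: "training" is stood in for by computing one centroid per class, and shipping the model to the device is stood in for by serializing it with `pickle`. All names here (`training_data`, `predict`, and so on) are hypothetical.

```python
import pickle
from statistics import mean

# --- Step 1: develop the model (offline, e.g. in the cloud) ---
# Tiny labelled dataset of (sensor reading, label) pairs.
training_data = [(1.0, "low"), (1.5, "low"), (8.0, "high"), (9.0, "high")]

# "Training": compute one centroid per class (a stand-in for real training).
centroids = {}
for label in {lbl for _, lbl in training_data}:
    centroids[label] = mean(x for x, lbl in training_data if lbl == label)

# Serialize the trained model so it can be shipped to the edge device.
model_blob = pickle.dumps(centroids)

# --- Step 2: execute the model (on the edge device) ---
# The edge runtime only deserializes and predicts; it never retrains.
model = pickle.loads(model_blob)

def predict(x, model):
    """Classify a reading by its nearest class centroid."""
    return min(model, key=lambda label: abs(x - model[label]))

print(predict(2.0, model))  # a reading near the "low" centroid
print(predict(7.5, model))  # a reading near the "high" centroid
```

Note how the update cycle described above falls out of this split: when a better model is trained in the cloud, only a new serialized blob needs to be pushed to the device, with no change to the execution code.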

Intel Movidius Neural Compute Stick

Intel recently announced the Movidius NCS2. At the core of the stick is the Myriad X Vision Processing Unit (VPU), an AI-optimized chip for accelerating vision computing based on convolutional neural networks (CNNs). According to Intel, Myriad VPUs have a dedicated architecture for high-quality image processing, computer vision, and deep neural networks, making them well suited to the demanding mix of vision-centric tasks in modern smart devices.

Intel has added NCS 2 support to the OpenVINO Toolkit, its software platform for optimizing and deploying ML models. The toolkit also targets other Intel hardware, including Arria FPGA runtime environments. You can read a comprehensive getting-started guide here: https://software.intel.com/en-us/articles/get-started-with-neural-compute-stick

Intel is committed to democratizing AI by putting these tools in the hands of developers eager to dive into AI development. AI is still in its infancy, and as this space evolves, Intel will continue to advance disruptive approaches to compute that support the complex workloads of today and tomorrow.

If this post was helpful, please click the clap 👏 button below a few times to show your support! ⬇⬇



Intel AI Ambassador • Organizer, Nairobi AI • Program Assistant, ALC • All things Python • For the best of AI • Live and let live