Deploying AI at the Edge

Silviu Tudor Serban
Intel Software Innovators
Mar 11, 2019

AI is (becoming) omnipresent in everyday life.

From object recognition in images to natural language processing and speech recognition, AI is literally in our pockets, homes and vehicles.

AI algorithms require extensive data and processing power to perform at their best, which is why most AI-enabled products and services are primarily cloud-based. While the cloud has many advantages, such as simplified management and scalable compute resources, it is not always feasible for deploying AI solutions, especially in the following circumstances:

The application requires low latency responses

The connection to the cloud service is not guaranteed

The volume of (sensor) data is large

Sending data to the cloud, having it processed and receiving a response doesn't usually take long, but consider the caveats of running a system like Autopilot in the cloud: a dropped connection or a spike in latency could have serious consequences.

Fortunately, that is not the case, and given the continued increase in the computing power of edge devices, deploying AI directly on them is becoming a viable architectural decision.
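The latency argument above can be made concrete with a back-of-the-envelope sketch. All numbers here are illustrative assumptions (uplink speed, network round trip, inference times), not measured benchmarks:

```python
# Back-of-the-envelope latency comparison between a cloud round trip and
# local inference on an edge device. Numbers are illustrative assumptions.

def cloud_latency_ms(payload_kb, uplink_mbps=10, network_rtt_ms=50, inference_ms=20):
    """Cloud round trip: network round trip + payload transfer + server inference."""
    transfer_ms = payload_kb * 8 / (uplink_mbps * 1000) * 1000  # KB over a Mbps link
    return network_rtt_ms + transfer_ms + inference_ms

def edge_latency_ms(inference_ms=80):
    """Edge: local inference only; edge accelerators are often slower per frame."""
    return inference_ms

# For a 200 KB camera frame on a 10 Mbps uplink:
print(cloud_latency_ms(200))  # 50 + 160 + 20 = 230 ms
print(edge_latency_ms())      # 80 ms
```

Even with generous assumptions, the cloud path pays a fixed network cost per request that local inference avoids entirely, and that cost grows with the volume of sensor data being shipped.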

A while ago, I got my hands on the Intel Movidius Neural Compute Stick, a device for developing and deploying deep learning algorithms at the edge. It is my belief that hardware such as the NCS will open the door to a new class of partially or fully cloud-independent AI solutions and products.

For a practical use case and development guide, make sure to read our article Developing a productivity enhancement tool powered by on-device AI. To get started with the Intel Neural Compute Stick, visit: https://software.intel.com/en-us/neural-compute-stick.
