Composing Deep-Learning Microservices for the Hybrid Internet of Things

Deep learning is steadily remaking every aspect of our world. But it can't achieve its full potential unless developers have the right tools for packaging this machine intelligence for universal deployment.

Deep learning is coming into cloud data service developers' working environments through incorporation into tools that support microservices architectures: an increasingly popular style of developing cloud applications as suites of modular, reusable, and narrowly scoped functions. In microservices environments, each function executes in its own container (such as a Docker container), and each microservice interoperates in a lightweight, loosely coupled fashion with other microservices via RESTful APIs.
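
To make the pattern concrete, here is a minimal sketch (not from the original article) of a deep-learning scoring function wrapped as a narrowly scoped REST microservice, ready to be packaged in a container. Flask is used purely for illustration, and the score() helper is a hypothetical stand-in for a real model:

```python
# Minimal sketch: one narrowly scoped scoring function exposed as a
# RESTful microservice, suitable for packaging in a Docker container.
from flask import Flask, jsonify, request

app = Flask(__name__)

def score(features):
    # Hypothetical stand-in for a real deep-learning model's forward pass.
    return {"label": "anomaly", "confidence": 0.87}

@app.route("/v1/score", methods=["POST"])
def score_endpoint():
    payload = request.get_json(force=True)
    return jsonify(score(payload.get("features", [])))

if __name__ == "__main__":
    # In a container image, this port would be exposed in the image config.
    app.run(host="0.0.0.0", port=8080)
```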

Deep learning will increasingly ride on containerized microservices that execute in complex multiclouds. Typically, this happens through abstract "serverless" interfaces that enable microservices to execute transparently on back-end cloud infrastructures without developers needing to know where or how the IT resources are provided. The serverless back end automatically provisions the requisite compute power, storage, bandwidth, and other distributed resources to microservices at run time.
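
In practice, a serverless deep-learning function reduces to a single stateless entry point that receives an event payload and returns a result, with the platform deciding where it runs. The sketch below follows the single-entry-point convention that serverless platforms such as OpenWhisk (discussed below) use for Python functions; the parameter names are hypothetical:

```python
# Sketch of a stateless serverless handler: main() receives a dict of
# event parameters and returns a dict result. The platform, not the
# developer, decides where and on what infrastructure this executes.
def main(params):
    reading = params.get("sensor_reading", 0.0)   # hypothetical field
    threshold = params.get("threshold", 0.9)      # hypothetical field
    # Placeholder inference step; a real function would invoke a model.
    return {"anomaly": reading > threshold, "reading": reading}
```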

In shifting to this mode of composing applications, deep-learning developers need not worry about pre-provisioning infrastructure such as servers, or about day-to-day operations. Instead, they can simply focus on modeling and coding. But for it all to interoperate seamlessly in complex deep-learning applications, there needs to be a back-end middleware fabric that provides reliable messaging, transactional rollback, and long-running orchestration capabilities (such as those provided by Kubernetes).
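
As one hedged illustration of such orchestration, the official Kubernetes Python client can register a deep-learning microservice as a Deployment, after which Kubernetes keeps the declared number of replicas running; the service name and image below are hypothetical:

```python
# Hedged sketch: handing a containerized deep-learning microservice to
# Kubernetes, which then owns replication, placement, and restarts.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig for cluster access

container = client.V1Container(
    name="dl-scorer",                      # hypothetical service name
    image="example.com/dl-scorer:latest",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "dl-scorer"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="dl-scorer"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes maintains three running instances
        selector=client.V1LabelSelector(match_labels={"app": "dl-scorer"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```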

When developers construct deep-learning microservices for execution in the Internet of Things (IoT), the back-end interoperability fabric becomes even more complex than in most clouds. That's because deep learning is becoming an embedded capability of all IoT endpoints, as well as a service provided to applications from IoT hubs and from centralized cloud services. Just as the deep-learning services themselves will be "micro" in the sense of being narrowly scoped, the IoT endpoints where the algorithms execute will themselves be increasingly micro (or rather, nano) in their resource constraints.

In the IoT, embedded deep-learning microservices will process the rich streams of real-time sensor data captured by endpoint devices. These microservices will drive the video recognition, motion detection, natural-language processing, clickstream processing, and other pattern-sensing applications upon which IoT apps depend. In this way, every object of any sort will be imbued with continuous data-driven intelligence, environmental awareness, and situational autonomy.
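
A hedged sketch of what such an embedded microservice might look like on a constrained endpoint follows, using the TensorFlow Lite runtime as one plausible lightweight inference engine; the model file and read_sensor_frame() helper are hypothetical:

```python
# Hedged sketch of an embedded inference loop on an IoT endpoint,
# using the TensorFlow Lite runtime as one plausible lightweight choice.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="motion_detector.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def read_sensor_frame():
    # Hypothetical device-specific helper; shape must match the model.
    return np.zeros(input_info["shape"], dtype=np.float32)

while True:
    frame = read_sensor_frame()
    interpreter.set_tensor(input_info["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_info["index"])
    # A real endpoint would act locally on `scores` (situational
    # autonomy) and forward only significant events upstream.
```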

For composition and deployment of IoT deep-learning apps, developers require a middleware back end that distributes microservices for execution at network endpoints. To support these IoT use cases, microservices architectures will evolve to support the radically distributed, edge-oriented cloud architecture known as "fog computing."

In this new paradigm, developers compose deep learning and other distributed capabilities using microservices APIs and serverless back ends that transparently distribute, parallelize, and optimize workloads out to the fog's myriad endpoints. One way to illustrate the layering of a fog architecture is shown below; keep in mind that containerized microservices enable the "deep (learning) analytics zone" to pervade the entire cloud, all the way down to smart devices, sensors, and gateways.

Fog computing architecture

The figure below illustrates how containerization is supported within the Application Services Layer of the OpenFog Consortium's Reference Architecture. Essentially, deep-learning apps and other microservices run inside whatever containerized environment, such as Docker orchestrated by Kubernetes, serves as an IoT fog's software backplane. In so doing, these containerized deep-learning microservices can tap into IoT/fog support-layer services (such as databases and middleware) that themselves run as microservices inside their own containers.
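
As a hedged sketch of that interaction, a deep-learning microservice in one container might call a support-layer database service in another by its in-cluster DNS name; the "feature-store" service and its endpoint below are hypothetical:

```python
# Hedged sketch: a containerized deep-learning microservice calling a
# support-layer service in a sibling container. Within a Kubernetes-style
# fabric, services resolve by DNS name; "feature-store" is hypothetical.
import requests

def fetch_features(device_id):
    resp = requests.get(
        "http://feature-store:8080/features",  # hypothetical endpoint
        params={"device": device_id},
        timeout=2.0,
    )
    resp.raise_for_status()
    return resp.json()
```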

Containerization for Application Support in the IoT Fog (source: OpenFog Consortium “OpenFog Reference Architecture for Fog Computing”)

From a broader perspective, the teams that make sure all these deep-learning microservices hang together will have data science as their core skillset. These professionals' core job will be to build, train, and optimize the convolutional, recurrent, and other deep neural-net algorithms upon which this technological wizardry depends. They will use tools such as IBM Bluemix OpenWhisk to build event-based deep-learning microservices for deployment on IoT fog/clouds that may be entirely private, entirely public, or spanning private and public segments in hybrid architectures.

Bluemix OpenWhisk is available today on IBM Bluemix, and the open source community can be found here. To support agile composition of deep-learning apps for the hybrid IoT, OpenWhisk provides built-in chaining, which enables teams to individually develop and customize deep-learning and other microservices as small pieces of code that can be connected in orchestrated sequences. It provides comprehensive, flexible support for serverless deployment, cognitive algorithms, containers, and languages for the IoT and other use cases.
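
The sketch below illustrates that chaining style under stated assumptions: each action lives in its own file with a main() entry point, the dict one action returns becomes the parameters of the next, and the wsk CLI links them into a sequence. The action names and logic are hypothetical:

```python
# Hedged sketch of OpenWhisk-style chaining. Deployed separately, these
# two actions could be linked into a sequence, e.g.:
#   wsk action create prep prep.py
#   wsk action create classify classify.py
#   wsk action create dlPipeline --sequence prep,classify

# prep.py: normalize a raw sensor payload (would be named main() there)
def prep_main(params):
    raw = params.get("raw", [1.0])
    peak = max(abs(x) for x in raw) or 1.0
    return {"normalized": [x / peak for x in raw]}

# classify.py: stand-in for a deep-learning step (main() in its file)
def classify_main(params):
    values = params.get("normalized", [])
    # Placeholder decision logic; a real action would run a model.
    return {"label": "active" if sum(values) > 0 else "idle"}
```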

Here’s a high-level functional architecture of OpenWhisk:

OpenWhisk provides a distributed compute service to execute application logic in response to events.

For further information on the underlying OpenWhisk architecture and source code, start here and proceed to the project’s GitHub page.