Decoupling Computation from Data Transformation
The Computes Mesh opens up new ways of delivering applications by putting the idle CPU/GPU power sitting around your network to work. In a traditional API setup, computation and data transformation are linked, usually deployed together as an application running on a web server. Computes lets you decouple your computation from your API, which creates new opportunities for data transformation.
I’d like to take you on a journey of …
Prologue
Imagine a network of security cameras with dual-core processors on board, tasked with object detection, recognition, and alerting. Instead of feeding the raw data back to an API, a series of algorithms could be chained together using Computes tasks to perform increasingly complex data transformations. A basic object detection algorithm would be deployed to the camera itself. When activity occurs, the camera would add a task to Computes, using the output of the object detection algorithm as that task's input. A chain of tasks using increasing levels of computational power could then be kicked off, distributed among currently inactive cameras, or among other computational resources if demand is higher than the cameras themselves can handle. The results of all this activity could be fed into another chain of tasks used to improve the algorithms involved, and a new version of an algorithm could be instantly deployed to the Computes network.
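The actual Computes task API isn't shown in this post, so here is only a hypothetical Python sketch of the chaining idea: each stage's output becomes the input of the next, more expensive stage. Every function name and data shape below is invented for illustration.

```python
# Hypothetical sketch of chained tasks. Each stage consumes the
# previous stage's output, the way one Computes task's result would
# become the next task's input. None of this is the real Computes API.

def detect_objects(frame):
    # cheap on-camera pass: just label what kinds of objects appear
    return {"objects": ["human"], "frame": frame}

def classify(detection):
    # heavier pass, run on an idle camera elsewhere in the mesh
    detection["attributes"] = {"height_m": 1.65, "sex": "male"}
    return detection

def alert(classified):
    # final, notification-level stage
    return f"watch-list match: {classified['attributes']}"

def run_chain(payload, stages):
    # each stage's output is handed on as the next stage's input
    for stage in stages:
        payload = stage(payload)
    return payload

result = run_chain("frame-0042", [detect_objects, classify, alert])
```

In the mesh version, each hop in the chain could land on a different machine; the chain itself is just data flowing from task to task.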
A Serious Example (well, not quite)
Let’s walk through this step by step. Imagine we are looking for a suspect, approximately 1.65m tall, male, wearing an orange jacket, who has been reported in the area.
- Camera detects activity; the basic classification algorithm adds metadata about what types of objects can be seen in the video stream.
- A task is enqueued in Computes to check the current watch list (human, male, 1.65m). All active cameras run this task to see if we have a match.
- Imagine our cameras can’t (yet) properly analyze clothing. A series of small tasks could be fired in parallel to apply a set of general-purpose clothing detection algorithms to a few key frames of video, run by the inactive cameras around the perimeter. If those algorithms detect the orange jacket, we can notify the proper authorities.
- Found him!
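That parallel clothing-detection step could be sketched like this, with a thread pool standing in for the idle perimeter cameras. The frames and the detector are stubs fabricated for illustration; the real algorithms and the real Computes fan-out mechanism aren't part of this post.

```python
# Sketch: fan a stub clothing detector out over a few key frames in
# parallel, the way small Computes tasks would be fanned out to idle
# cameras. Frame data and detector logic are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

KEY_FRAMES = [
    {"id": 101, "clothing": ["jeans"]},
    {"id": 117, "clothing": ["orange jacket", "jeans"]},
    {"id": 130, "clothing": ["hat"]},
]

def detect_clothing(frame):
    # stand-in for a general-purpose clothing detection algorithm
    return frame["id"], "orange jacket" in frame["clothing"]

# each frame is an independent task, so they can all run at once
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(detect_clothing, KEY_FRAMES))

matches = [fid for fid, hit in results.items() if hit]
if matches:
    print(f"notify authorities: orange jacket in frames {matches}")
```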
Beyond the Edge
While the notification task is running, another series of more advanced tasks could be taking place to find this individual’s point of origin. More processing power than the cameras have to offer is required. Computes tasks can contain constraints that define certain requirements; let’s imagine that in this case a faster CPU and a GPU are required to run the appropriate video tracking algorithm. The idle CPUs in management’s powerful yet underutilized desktops would be an ideal target.
- Suspect identified, while notifications are taking place, we start to track the origin point of the suspect.
- Many frames of video can be analyzed at once using parallel tasks, distributed to management desktops.
- The video is traced back to the first appearance of the suspect.
- This vehicle is probably involved somehow, let’s update the authorities and figure out where this vehicle came from.
Let’s run a whole new series of tasks to identify this vehicle and see if we can get a license plate.
Huh, that’s quite unusual. Our algorithms have never seen anything like this before. Let’s flag that for further processing to better train our algorithm in the future.
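The constraint idea from the tracking step could look something like the following. The `requires` and member fields are hypothetical; the real Computes constraint format may differ.

```python
# Sketch of constraint matching: a task declares what it needs, and
# only mesh members whose advertised resources satisfy those
# constraints are eligible to run it. All field names are invented.
TASK = {
    "name": "video-tracking",
    "requires": {"cpu_cores": 4, "gpu": True},
}

MEMBERS = [
    {"name": "camera-7",    "cpu_cores": 2, "gpu": False},
    {"name": "mgmt-desk-3", "cpu_cores": 8, "gpu": True},
]

def satisfies(member, requires):
    return (member["cpu_cores"] >= requires["cpu_cores"]
            and (member["gpu"] or not requires["gpu"]))

eligible = [m["name"] for m in MEMBERS if satisfies(m, TASK["requires"])]
# only the underutilized management desktop qualifies
```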
Bursting With Possibilities
Let’s also check all the footage to see if we can get a license plate. These algorithms require far more processing power than we have on site, so the tasks are flagged to be taken on by our data center running in AWS. When Computes notices more work than we have capacity for, additional mesh members can be spun up automatically in EC2 and put to work.
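A rough sketch of the bursting decision, arithmetic only: a real implementation would go on to launch instances through the EC2 APIs. The capacity numbers are made up for illustration.

```python
# Sketch: when queued tasks exceed on-site capacity, work out how many
# cloud workers to spin up. The tasks-per-worker figure is fabricated.
import math

def workers_to_burst(queued_tasks, local_capacity, tasks_per_worker):
    overflow = queued_tasks - local_capacity
    if overflow <= 0:
        return 0  # the on-site mesh can absorb the work
    return math.ceil(overflow / tasks_per_worker)

# 500 plate-reading tasks, room for 120 on site, 50 tasks per worker
n = workers_to_burst(500, 120, 50)  # -> 8 EC2 workers
```

The appeal of this model is that the expensive path only exists while the overflow does; once the queue drains, the burst workers can be torn down.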
- Time passes.
- Got it!
Authorities have been updated with the license plate, time of entry, and numerous other details in real time, as the algorithms get hits. A final report is also compiled for later review.
We didn’t have to go to the cloud unless we wanted to, and we only sent work out to our most expensive resources when absolutely necessary. We can keep our sensitive data behind the firewall and use best-of-breed algorithms to do the heavy lifting instead of shipping data off to a fixed API.
TL;DR: Computes Mesh lets you bring your algorithms close to the data instead of shipping data all over the place. It lets you utilize latent processing power that would be impractical to harness with deployed web APIs. Chaining tasks allows branching computations to be broken up into small pieces and easily deployed in a hybrid environment.