Realtime IoT video analysis on OpenShift

Andrew Stoycos
4 min read · Mar 3, 2020


A few weeks back I wrote a post about putting together a new IoT stack using Enmasse and Knative as the core technologies. After that it was time to demonstrate its full functionality with a real use case. I spent a few days trying to decide between building my own IoT setup with Raspberry Pis, Arduinos, and other low-resource connected devices, or building a simple codebase to simulate an IoT device. For the demo I fought off the instinct to nerd out with a bunch of devices and instead stuck with K.I.S.S. (keep it simple, stupid) by building a basic IoT camera simulator. It can pull video from any YouTube live stream and push it to an Enmasse device endpoint via HLS (HTTP Live Streaming). In this case I decided to use a live stream of Jackson Hole, Wyoming, mainly because of this article.

Next I worked on writing a custom Knative service. Basic instructions for doing so can be found here, however with more complicated applications it can get confusing. For this demo the service needed to handle a larger workload:

  1. Listen for CloudEvents from my IoT container source containing the live video stream
  2. Run realtime video analysis using the TensorFlow Object Detection API
  3. Display the interpreted video back to the user via an external webpage
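Step 1 means the service has to unpack incoming CloudEvents. When events are delivered over HTTP in the CloudEvents binary content mode, the event attributes arrive as `ce-`-prefixed headers and the payload is the request body. Here is a small sketch of that unpacking; the helper name and wiring are my own for illustration, not necessarily the demo's:

```python
def parse_binary_cloudevent(headers, body):
    """Extract CloudEvent attributes from an HTTP request delivered in
    binary content mode: attributes travel as 'ce-' prefixed headers
    (per the CloudEvents HTTP protocol binding) and the event data is
    the raw request body."""
    attrs = {
        name[3:].lower(): value
        for name, value in headers.items()
        if name.lower().startswith("ce-")
    }
    return attrs, body
```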

The realtime video analysis was done with a pre-trained model, ssd_mobilenet_v1_fpn_coco, from the TensorFlow Detection Model Zoo. I chose this model because it offered the best trade-off between accuracy and speed. I also experimented with faster models, which let the service handle a higher FPS but produced much lower overall detection rates. Feel free to plug and play any of these models into the service by simply modifying the model_name variable in app/analysis.py.
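A sketch of what that plug-and-play spot might look like. Only ssd_mobilenet_v1_fpn_coco comes from the post itself; the alternatives below are well-known zoo entries listed for illustration, and the helper is hypothetical — check names against the zoo's current list before using them.

```python
# Candidate models from the TensorFlow Detection Model Zoo. Only
# ssd_mobilenet_v1_fpn_coco is taken from this demo; the others are
# common zoo entries shown here as examples of the speed/accuracy knob.
KNOWN_MODELS = {
    "ssd_mobilenet_v1_fpn_coco",      # demo's choice: best speed/accuracy balance
    "ssd_mobilenet_v1_coco",          # faster, lower detection rates
    "faster_rcnn_inception_v2_coco",  # more accurate, slower
}

def set_model(model_name):
    """Fail fast on a mistyped model name before the service tries to
    download and load it."""
    if model_name not in KNOWN_MODELS:
        raise ValueError(f"unknown model: {model_name!r}")
    return model_name
```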

Now, if you know anything about serverless, or if you have read through Knative’s runtime contract, you might be thinking, “What is this guy doing processing such a big workload with a serverless function?” If not, here is a key portion of the contract:

In general I would not currently advise running TensorFlow within a Knative service, based on the rules shown above, due to its slow startup time and resource-intensive nature. I may be breaking some guidelines, but this demo was meant to push the limits and see what was possible. Also, apart from the slower startup time, the contract is generally followed: the processing is relatively stateless, and for normal use I assume individual sessions will last a relatively long time, meaning CPU usage will be constant following startup.

Regardless, I was ultimately able to implement the service successfully; it is shaded blue in the overall system diagram shown below.

IoT Realtime Streaming Workflow

The biggest implementation challenge involved networking between the iotContainerSource and the iotKnativeService, since Knative requires that a service expose only one port. This becomes an issue because the application needed one port for listening for new CloudEvents (port 8000) and another for serving web requests (port 8080). This design choice again reflects Knative’s runtime contract stipulations, such as short startup times and low resource use. However, I believe it limits the usefulness of Knative and encourages the use of persistent data stores. The temporary fix is to manually set the iotContainerSource’s sink variable to the service’s podIP using

export IP=$(oc get pods --selector='serving.knative.dev/service' -o jsonpath='{.items[*].status.podIP}')

and then to manually reconfigure the container source every time the service is brought up via the web browser. This works, but it is by no means a permanent solution, since the IP in iotContainerSource needs to be re-configured every time the service scales to zero and comes back up again.
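A cleaner way to live within the single-exposed-port constraint, rather than chasing pod IPs, could be to multiplex both jobs onto one port and route by HTTP method and path. The following is a minimal stdlib sketch of that idea, not the demo's actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class IotHandler(BaseHTTPRequestHandler):
    """Serve browser requests and receive CloudEvents on the same port."""

    def do_GET(self):
        # Browser-facing page (the demo served this on 8080).
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>interpreted video goes here</body></html>")

    def do_POST(self):
        # CloudEvents arrive as HTTP POSTs (the demo listened on 8000).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # ... hand `body` off to the analysis pipeline here ...
        self.send_response(202)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To run standalone:
#   HTTPServer(("", 8080), IotHandler).serve_forever()
```

With both handlers behind a single port, the container source could sink to the service's stable URL instead of a pod IP that changes on every scale-from-zero.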

The code and instructions for firing up the entire demo can be found in the iotKnativeSource GitHub repo, and a full video showing the demo in action can be found below.

https://www.youtube.com/watch?v=Xs_rn6SNMlw&feature=youtu.be

Moving Forward…

Next steps for this stack will involve the integration of two major components:

  1. A persistent data store for IoT data that needs to be processed at a later date
  2. The integration of OpenDataHub to take care of larger AI/ML workloads involving the IoT data

Also, if I get some extra time, it would be great to see whether I could speed up the realtime video analysis with an even better model or other video-manipulation tricks.

Thanks for reading, and feel free to reach out with any questions!

Git Repos:

iotContainerSource: https://github.com/astoycos/iotContainerSource

iotDeviceSimulator: https://github.com/astoycos/iotDeviceSimulator

iotKnativeSource: https://github.com/astoycos/iotKnativeSource


Andrew Stoycos

Software Engineer currently working at Red Hat, based in Boston. Fascinated with cloud technologies and the industry’s push toward edge computing.