Containerizing Nvidia DeepStream Apps

Utkarshkushwaha
4 min read · Aug 19, 2023


Imagine a situation in which real-time video streams from security cameras are intelligently analyzed to find objects, identify anomalies, and make important judgments while smoothly scaling to suit the needs of a busy metropolitan setting. Thanks to advancements in technology like Nvidia DeepStream, this futuristic vision is quickly becoming a reality.

But as video analytics applications become more complex, so does the demand for effective deployment and administration methods. This is where containerization helps. By being packaged into small, portable containers, DeepStream applications gain flexibility, scalability, and reproducibility. Beyond that, setting up a DeepStream environment from scratch still takes a considerable amount of time; working with DeepStream containers saves that setup time and lets you focus on development.

In this article, I’ll cover the issues I faced while containerizing Nvidia DeepStream apps, the things you need to know before dockerizing them, and how you can Dockerize DeepStream apps to run and develop DeepStream pipelines on any device with an Nvidia GPU. (*one that is compatible with and capable of handling Nvidia DeepStream)

So I am assuming two situations here:

Category A. You have already developed a DeepStream app on your local machine and now want to containerize it.
Category B. You want to get started with Nvidia DeepStream and, instead of getting into the hassle of installation and everything, want to run DeepStream in a Docker container.

Pre-requisites:
1. You should have an idea of what Nvidia DeepStream is and some experience building DeepStream pipelines
2. Know what Docker, containerization, etc. are (you can check out NetworkChuck or KunalKushwaha, I found their tutorials really helpful)
3. Basic Linux commands

Let’s get started:

  1. Install the Docker Engine on your machine. Since the Nvidia DeepStream SDK is only available on Linux, you can simply follow the Linux installation steps from here and you’d be good to go.
  2. Install the Nvidia Container Toolkit: if you want your containerized apps to leverage the compute capabilities of Nvidia GPUs, you’ll have to install this (you essentially can’t run Nvidia DeepStream in containers without it installed on the host). The NVIDIA Container Toolkit allows users to build and run GPU-accelerated containers; it includes a container runtime library and utilities that automatically configure containers to leverage NVIDIA GPUs.

Follow this guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html to install Nvidia Container Toolkit on your device (Comment down if you think I should write a dedicated blog on container toolkit installation)
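If you’d like a preview of what that guide walks you through, the Ubuntu/Debian flow condenses to a few commands. The package and tool names below are taken from NVIDIA’s docs; follow the guide itself for the repository-setup step and for other distros:

```shell
TOOLKIT_PKG="nvidia-container-toolkit"

# 1. After adding NVIDIA's apt repository (see the guide), install the toolkit:
#      sudo apt-get update && sudo apt-get install -y "$TOOLKIT_PKG"
# 2. Register the NVIDIA runtime with Docker and restart the daemon:
#      sudo nvidia-ctk runtime configure --runtime=docker
#      sudo systemctl restart docker
# 3. Verify: the "nvidia" runtime should now be listed by `docker info`.
if command -v docker >/dev/null 2>&1; then
  docker info 2>/dev/null | grep -i "runtimes" || true
fi
```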

Image Credits: Nvidia-Docs

Now, if you belong to Category B (as mentioned above), the steps are pretty simple and straightforward. Follow the steps below in the order in which they are written and you can start building video analytics pipelines in a container.

# Pull the latest DeepStream SDK image from the Nvidia Container Registry
# (this works well as a base image: it has all the dependencies needed to run
# .. DeepStream apps; you only need to install the Python bindings if you want
# .. to work with the DeepStream SDK from Python)

sudo docker pull nvcr.io/nvidia/deepstream:6.3-samples

# Run a container from this image in interactive mode and start playing
# .. with DeepStream

sudo docker run -it --runtime nvidia --gpus all <imageid or nvcr.io/nvidia/deepstream:6.3-samples> /bin/bash

The first command just pulls the image from the Nvidia Container Registry, but if the second command got you all confused, let me break it down for you.

  1. docker run: Run a new Docker container.
  2. -it: Allocate an interactive terminal session within the container. This allows you to interact with the container's command-line interface.
  3. --runtime nvidia: Specify the NVIDIA runtime for the container. This is needed when the container must access NVIDIA GPUs, which is why we installed the Nvidia Container Toolkit earlier.
  4. --gpus all: Make all available GPUs accessible to the container. You can either set it to all or expose specific GPUs to the container.
  5. <imageid>: Replace <imageid> with the actual ID or name of the Docker image you want to run. This specifies the base image for the container.
  6. /bin/bash: Start an interactive Bash shell (/bin/bash) inside the container.
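Once you have the image, a quick sanity check is to run nvidia-smi through it: if GPU pass-through is working, it prints the same GPU table you’d see on the host. The sketch below only fires if the image is already pulled; add sudo to the docker calls if your user isn’t in the docker group:

```shell
IMAGE="nvcr.io/nvidia/deepstream:6.3-samples"

# Run nvidia-smi in a throwaway container; --rm deletes it afterwards.
if command -v docker >/dev/null 2>&1 \
   && docker image inspect "$IMAGE" >/dev/null 2>&1; then
  docker run --rm --runtime nvidia --gpus all "$IMAGE" nvidia-smi || true
fi

# Inside an interactive shell, the DeepStream install itself lives under
# /opt/nvidia/deepstream/deepstream-6.3 (samples, configs, and libs).
```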

For the folks belonging to Category A, who have already developed a DeepStream app on their machine and now want to containerize it, either for deployment or for sharing the entire project with other teams:
You can create your own image using a Dockerfile, in which you specify the installation of all your project’s other dependencies. Place it in the directory of your project and run this command (note the trailing dot, which sets the build context to the current directory):
sudo docker build -t <name>:<version> .

Or just add a docker-compose.yaml file in the same directory; using this file you can also take care of network configuration and volume management if needed (if these terms sound foreign to you, I’d suggest checking out a Docker tutorial first). After that, you can simply run
sudo docker-compose up
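For reference, a minimal docker-compose.yaml might look like the sketch below. The service name, image tag, and volume paths are placeholders I’ve made up for illustration; `runtime: nvidia` assumes the Container Toolkit is configured as described earlier (older docker-compose releases only honor this key with the 2.x file format):

```yaml
# A hypothetical compose file; adjust names and paths for your project.
services:
  deepstream-app:                # placeholder service name
    image: my-ds-app:1.0         # the image you built from your Dockerfile
    runtime: nvidia              # NVIDIA runtime from the Container Toolkit
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ./streams:/app/streams   # example: mount local video files into /app
```

With this file in place, `sudo docker-compose up` starts the service with GPU access and your mounted volumes.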

Now you must be wondering how to prepare a Dockerfile for containerizing DeepStream apps. Here’s a sample Dockerfile for a dGPU-type device:

# Choose a DeepStream base image
FROM nvcr.io/nvidia/deepstream:6.3-samples

# Set the working directory
WORKDIR /app

# Copy the contents of the current working directory into the image
COPY . .

# Like this you can add one or more packages that your app needs
RUN pip install xyz_package
RUN pip install abc_package

# To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video

References: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html
