Deploy GPU-enabled Kubernetes Pod on NVIDIA Jetson Nano

Jerry Liang
May 6

NVIDIA Jetson Nano delivers GPU power in an amazingly small package. I finally got mine in the mail and couldn’t wait to add it to my Raspberry Pi K8s cluster to take on GPU workloads. It turned out, however, that I had to jump through a couple of hoops to get it working. In this blog post, I will walk through the steps needed, in the hope that it helps you get yours working too.

There are two things to do to enable GPU support:

  • Recompile Jetson Nano’s kernel to enable modules needed by Kubernetes (K8s) and, in my cluster, by weaveworks/weave-kube as well
  • Expose GPU devices to the containers running the GPU workload pods

Recompile NVIDIA Jetson Nano’s Kernel for K8s

There are many fantastic guides to recompiling the kernel on Jetson devices. I would recommend reading the Hypriot blog or NVIDIA’s forum, since all the steps mentioned can be done directly on the Jetson Nano itself.

If you follow the Hypriot blog and would like to use weave-kube like me, you can use a copy of the config I used, which adds a few extra kernel options: https://gist.github.com/direbearform/58349ae5ee7ddc1687b1019112746140.
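For reference, an on-device build roughly follows this shape. This is only a sketch, assuming the L4T kernel sources have already been fetched to /usr/src/kernel/kernel-4.9 (the guides linked above cover fetching them):

    cd /usr/src/kernel/kernel-4.9
    # Start from the running kernel's configuration, then enable the extras
    zcat /proc/config.gz > .config
    make olddefconfig
    make menuconfig            # or drop in the config file from the gist
    make -j4 Image modules
    sudo make modules_install
    sudo cp arch/arm64/boot/Image /boot/Image
    sudo reboot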

In comparison to the stock kernel, here is the list of extra options enabled:
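The authoritative diff is in the gist linked above; to give a flavor, options in this family (netfilter/ipset/VXLAN support that kube-proxy and weave-kube rely on) are the usual suspects:

    # Illustrative examples only — check the gist for the exact list
    CONFIG_IP_SET=m
    CONFIG_IP_SET_HASH_IP=m
    CONFIG_NETFILTER_XT_SET=m
    CONFIG_NETFILTER_XT_MATCH_COMMENT=m
    CONFIG_VXLAN=m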

After rebooting, run a few modprobe commands to ensure that the required modules were indeed loaded (no output from modprobe means they loaded okay):
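For example (the module names here match the illustrative options above; substitute whatever you enabled):

    sudo modprobe ip_set
    sudo modprobe xt_set
    sudo modprobe vxlan
    # Silence means success; "modprobe: FATAL: Module ... not found"
    # means the option did not make it into the new kernel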

Now, you can join the Jetson Nano to your existing K8s cluster.
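If the cluster was set up with kubeadm, joining looks like the usual join command (the values below are placeholders, not real ones; print the real command with kubeadm token create --print-join-command on the master). You can also apply the node label used later in the deployment right away:

    sudo kubeadm join <master-ip>:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # From a machine with kubectl access, label the new node
    kubectl label node <nano-node-name> devicemodel=nvidiajetsonnano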

Create K8s pod with GPU support

As pointed out here, NVIDIA’s nvidia-docker is not supported on Tegra devices, and they do not plan to add support either. To work around this, we need to give the container direct access to the GPU devices:
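With plain Docker, the Tegra-Docker approach passes the Tegra GPU’s device nodes in explicitly. The exact set of nodes can vary by L4T release, but it looks like this:

    docker run --rm \
      --device=/dev/nvhost-ctrl \
      --device=/dev/nvhost-ctrl-gpu \
      --device=/dev/nvhost-prof-gpu \
      --device=/dev/nvmap \
      --device=/dev/nvhost-gpu \
      --device=/dev/nvhost-as-gpu \
      device_query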

We cannot directly use the docker --device parameter in K8s, but we can use a hostPath volumeMount as a workaround. For a simple GPU availability test, you can use the device_query container image from Tegra-Docker.

Here is an example K8s deployment YAML that passes those devices through, assuming you built the device_query container, pushed it to a private registry that K8s has access to, and labeled the Jetson Nano node with devicemodel=nvidiajetsonnano:
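A sketch of what that looks like (the image name and registry are placeholders; I am also assuming a privileged security context, since hostPath-mounted device nodes are otherwise blocked by the device cgroup):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: device-query
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: device-query
      template:
        metadata:
          labels:
            app: device-query
        spec:
          nodeSelector:
            devicemodel: nvidiajetsonnano
          containers:
          - name: device-query
            image: registry.example.com/device_query:latest  # placeholder image
            securityContext:
              privileged: true  # assumption: required to open the device nodes
            volumeMounts:
            - name: nvhost-ctrl
              mountPath: /dev/nvhost-ctrl
            - name: nvhost-ctrl-gpu
              mountPath: /dev/nvhost-ctrl-gpu
            - name: nvhost-prof-gpu
              mountPath: /dev/nvhost-prof-gpu
            - name: nvmap
              mountPath: /dev/nvmap
            - name: nvhost-gpu
              mountPath: /dev/nvhost-gpu
            - name: nvhost-as-gpu
              mountPath: /dev/nvhost-as-gpu
          volumes:
          - name: nvhost-ctrl
            hostPath:
              path: /dev/nvhost-ctrl
          - name: nvhost-ctrl-gpu
            hostPath:
              path: /dev/nvhost-ctrl-gpu
          - name: nvhost-prof-gpu
            hostPath:
              path: /dev/nvhost-prof-gpu
          - name: nvmap
            hostPath:
              path: /dev/nvmap
          - name: nvhost-gpu
            hostPath:
              path: /dev/nvhost-gpu
          - name: nvhost-as-gpu
            hostPath:
              path: /dev/nvhost-as-gpu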

This is the result you will see once you deploy the container and inspect its log:
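Abbreviated and illustrative, but the line to look for is deviceQuery’s final Result = PASS:

    $ kubectl logs deploy/device-query
    ./deviceQuery Starting...
     CUDA Device Query (Runtime API) version (CUDART static linking)
    Detected 1 CUDA Capable device(s)
    Device 0: "NVIDIA Tegra X1"
      ...
    Result = PASS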

Conclusion

If you would like to create a K8s cluster with your new NVIDIA Jetson Nano, or join it to an existing one, especially with weave-kube, be prepared to recompile and deploy the kernel to enable a few options related to iptables. Once the K8s cluster is created successfully, you will also need to pass the GPU devices on the Jetson Nano directly into the container in the pod deployment template, given that the official NVIDIA Docker plug-in does not currently support the Jetson device family sporting the Tegra processor.

For detailed steps on kernel recompilation and the usage of Docker and K8s, please refer to the great blog posts linked in this write-up. What I hope to achieve here is to fill the gaps and help you get a Jetson Nano GPU workload on K8s working end to end. Feedback and discussion are welcome in case anything was not clear or did not work out.
