Docker on Jetson Nano for training AI models
What is a Jetson Nano?
The Jetson Nano is a headless computer without a monitor, mouse, or keyboard; it is essentially a small board with an integrated CPU and GPU. Since the Jetson Nano is a small edge device with powerful computation, you can use it to deploy an AI model on a robot. Putting your whole laptop inside a robot would consume all the space, right?
The Jetson Nano is used not only in robotics but also in the computer vision (CV) field. When a CV engineer needs to grab a CCTV stream and run a model (inference) in real time, it's time for the Jetson Nano!
Why docker?
When you need to reproduce your product, Docker frees you from repeating all the installation, setup, and dependency management for your programs. Docker is an OS-level virtualization (containerization) technology: it packages a program together with everything it needs to run. Say you have a Python script with many libraries. All you have to do is put that script and its libraries in a Docker image, then open a container from that image on another machine and run it.
Let’s walk through step by step
0. Install Docker for Jetson nano
Fortunately, Docker is already installed after you flash the Jetson with JetPack. The version I used is JetPack 4.5 (L4T R32.5.0), the latest version available at the time of writing.
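To double-check which L4T release you are on (and therefore which image tag to pull later), you can read the release file that JetPack installs:

```shell
# Prints something like: # R32 (release), REVISION: 5.0, ...
cat /etc/nv_tegra_release

# Confirm Docker is present
docker --version
```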
Run docker command without sudo (optional)
This step is optional, but it will save you a lot of time. Otherwise you have to type sudo before every docker command and enter your password all the time. Please follow this link to set up sudo-less Docker.
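For reference, the standard post-install steps (as described in Docker's official documentation) look like this; you need to log out and back in afterwards for the group change to take effect:

```shell
# Create the docker group (it usually already exists) and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER

# Apply the new group in the current shell, or simply log out and back in
newgrp docker

# This should now work without sudo
docker images
```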
1. Pulling docker image
Nvidia provides many wonderful resources, including Docker images for AI purposes. Just open a terminal on the Jetson Nano and run one of the commands below. Please note that the tag should match your JetPack version; check this link to find your tag.
Here are the basic Docker images. If you are a first-time user, I recommend using the Getting Started with Jetson Nano image. There is also a course that goes with it.
Docker image used in Getting started with Jetson nano course
docker pull nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0
Docker image for machine learning
docker pull nvcr.io/nvidia/l4t-ml:r32.5.0-py3
or just pure basic docker image
docker pull nvcr.io/nvidia/l4t-base:r32.5.0
After pulling, check the images with this command
docker images
Note that you can run docker commands from anywhere in the terminal.
2. Run docker container from docker image
After we have the Docker image, we need to create a container based on that image. Inside a container, you can make changes on top of the image or run a program. Run this command to start the container.
docker run --runtime nvidia --network host --name <name> --volume <local dir>:<container dir> -it <images name>:<tag>
example
docker run --runtime nvidia --network host --name demo_container --volume ~/Desktop/data:/workspace/data -it nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0
- --runtime nvidia uses the NVIDIA container runtime while running the container
- --network host allows the container to use your Jetson host's network and ports
- --name sets the name of the new container
- --volume syncs a local directory with a container directory
- -it means run in interactive mode
The volume part might confuse you a little. When you create a container, it is like creating a new world: you cannot access files inside it from the host unless you sync a directory between the two worlds. In the example above, /workspace/data in the container is synced with ~/Desktop/data on the host. That means if you create another directory inside /workspace, such as /workspace/code, it will not show up on your local computer.
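To see the sync in action, here is a quick sketch using the paths from the example above:

```shell
# On the Jetson host: create a file inside the mounted directory
echo "hello from host" > ~/Desktop/data/note.txt

# Inside the container: the same file appears under the mount point
cat /workspace/data/note.txt
# hello from host

# By contrast, anything created only inside the container outside the mount
# (e.g. /workspace/code) will not appear anywhere on the host.
```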
Congratulations!
Now you should be inside the container. You can git pull your code or pip install more libraries. As a suggestion, run this command in your working directory (on your development machine) to generate a requirements.txt
python3 -m pip freeze > requirements.txt
and just install it all in the docker container
python3 -m pip install -r requirements.txt
3. Stop and execute the docker container again
To go out from the container, press Ctrl + D or type exit.
Run this command to check the container status
docker ps -a
Stop the running container to free up RAM
docker stop <container name or ID>
If you want to use it again just start it
docker start <container name or ID>
To get inside the docker container again, don't docker run it, because that will create a new container. Just execute into it with this command.
docker exec -it <docker container> /bin/bash
Now you should be back inside that world again.
More suggestions
To run a model on the Jetson in the most efficient and optimized way, TensorRT and PyCUDA can help you shrink the model size and accelerate computation.
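As a rough sketch (assuming you have exported your model to ONNX at a hypothetical path, model.onnx), the trtexec tool that ships with JetPack can build an optimized TensorRT engine:

```shell
# trtexec ships with JetPack under /usr/src/tensorrt/bin
# model.onnx is a hypothetical path to your exported model;
# the saved engine can then be loaded at inference time.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The --fp16 flag enables half-precision, which usually gives a large speedup on the Jetson's GPU with little accuracy loss.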
Most AI courses, both on the internet and at university, focus on the training stage but do not teach the deployment stage. This Docker part is just a fraction of deployment; what else you need depends on what you will do with the Jetson Nano. In the CV field, that includes collecting streams from IP cameras, pushing results to a database, auto-restarting the system with systemd, and so on. Hence, as an AI engineer, to deliver a full product you should be able to build the full pipeline, or at least know what it should include.