Run Detectron2 Inside Docker Container
We can build a Docker container for Detectron2. This container-based approach is much better than just creating virtual environments: Docker ensures that whatever service runs on your machine will run on machines all over the world without breaking.
We will create a Dockerfile that first sets up CUDA and then installs Detectron2. One main benefit of a Dockerfile is that the resulting image runs on different Linux distributions, be it Ubuntu or Red Hat, which is useful for deployment.
If you don’t have Docker on your machine, set it up using:
On Ubuntu
sudo snap install docker
On Redhat
sudo yum install docker
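Once installed, you can sanity-check the installation before moving on (a quick check, not part of the Dockerfile; you may need to add your user to the docker group to drop the sudo):

```shell
# Verify the Docker client and daemon are working.
docker --version
sudo docker run --rm hello-world   # pulls and runs a tiny test image
```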
Create Dockerfile
Step 1: Select a Base Image and install basic packages.
Here we select an image with Nvidia Cuda
FROM nvidia/cuda:10.1-cudnn7-devel
The next line prevents the installer from opening interactive dialog boxes during installation, which would otherwise cause errors in a non-interactive build:
ENV DEBIAN_FRONTEND noninteractive
Install OpenCV, python3-dev, and some other essential packages:
RUN apt-get update && apt-get install -y \
python3-opencv ca-certificates python3-dev git wget sudo \
cmake ninja-build && \
rm -rf /var/lib/apt/lists/*
Create a symbolic link from python3 to python, i.e. running python will now invoke python3 by default:
RUN ln -sv /usr/bin/python3 /usr/bin/python
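Outside Docker, you can see what this symlink step does with a small sketch in a temporary directory (the paths here are throwaway stand-ins, not the real /usr/bin ones):

```shell
# Recreate the ln -sv step against stand-in files instead of /usr/bin.
tmp=$(mktemp -d)
touch "$tmp/python3"                 # stand-in for the python3 binary
ln -sv "$tmp/python3" "$tmp/python"  # same flags as in the Dockerfile
readlink "$tmp/python"               # prints the link target: .../python3
rm -rf "$tmp"
```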
Step 2: Create a non-root user
Docker gives us root privileges by default, but we don’t want to work as root all the time, so we will create a non-root user. Here, ARG is a build-time variable: it is only available from the moment it is declared with an ARG instruction in the Dockerfile until the moment the image is built.
ARG USER_ID=1000
RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER appuser
WORKDIR /home/appuser
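The build-time scope of ARG can be sketched like this (a hypothetical fragment, not part of this tutorial’s Dockerfile):

```dockerfile
# USER_ID exists only during `docker build`, and only below this line.
# Override it at build time with: docker build --build-arg USER_ID=$UID .
ARG USER_ID=1000
# Available here, e.g. to create a user whose uid matches the host's:
RUN echo "building for uid ${USER_ID}"
# It is NOT defined in the running container; copy it to ENV if you
# need it at runtime, e.g.: ENV USER_ID=${USER_ID}
```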
Add the user-local bin directory to the PATH:
ENV PATH="/home/appuser/.local/bin:${PATH}"
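The effect of this line can be sketched in plain shell: pip’s --user installs put console scripts into the user-local bin directory, so it must come first on the PATH.

```shell
# What ENV PATH="/home/appuser/.local/bin:${PATH}" amounts to:
export PATH="/home/appuser/.local/bin:${PATH}"
# The user-local bin dir is now searched first:
echo "$PATH" | cut -d: -f1   # -> /home/appuser/.local/bin
```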
Install Pip
RUN wget https://bootstrap.pypa.io/get-pip.py && \
python3 get-pip.py --user && \
rm get-pip.py
Step 3: Install Dependencies
See https://pytorch.org/ for other options if you use a different version of CUDA
Install the dependencies needed to build Detectron2:
RUN pip install --user tensorboard
RUN pip install --user torch==1.6 torchvision==0.7 -f https://download.pytorch.org/whl/cu101/torch_stable.html
RUN pip install --user 'git+https://github.com/facebookresearch/fvcore'
Step 4: Install Detectron 2
Clone the Detectron2 repo from GitHub:
RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo
Set FORCE_CUDA, because CUDA is not accessible during `docker build`:
ENV FORCE_CUDA="1"
By default, this builds detectron2 for all common CUDA architectures, which takes a lot more time, because during `docker build` there is no way to tell which architecture will actually be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
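If you know which GPU the image will run on, you can narrow the list at build time instead of compiling for every architecture (a sketch; “Turing” is an assumed example for an RTX 20xx-class card):

```dockerfile
# Build for a single architecture to cut compile time, e.g.:
#   docker build --build-arg TORCH_CUDA_ARCH_LIST="Turing" ...
ARG TORCH_CUDA_ARCH_LIST="Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
```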
Install detectron 2 via pip
RUN pip install --user -e detectron2_repo
Change the working directory to /home/appuser/detectron2_repo:
WORKDIR /home/appuser/detectron2_repo
You can see the full Dockerfile here. Download it using:
wget https://gist.githubusercontent.com/shashank2806/303e3ae90688c133816668179008dd1b/raw/36cc372857d6d12f3028814bb4732d111cfe6e25/Dockerfle
Build docker image
docker build --build-arg USER_ID=$UID -t detectron2:v0 .
Run it
docker run --gpus all -it \
--shm-size=8gb --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--name=detectron2 detectron2:v0
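Once the container is up, a quick smoke test is to check that detectron2 imports and the GPU is visible (a sketch; it assumes the container started by the command above is still running under the name detectron2):

```shell
# From another terminal on the host:
docker exec -it detectron2 python -c \
  "import detectron2, torch; print(detectron2.__version__, torch.cuda.is_available())"
```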
Sometimes you may encounter an error like this while running the container:
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
In this case, we can use nvidia-docker. Install nvidia-docker following its installation guide for Ubuntu, and then run this to build:
sudo nvidia-docker build -t detectron2:v1 .
and then this command to run the container
sudo nvidia-docker run -it --name detectron2 detectron2:v1
And that’s it. Happy Dockering!!