Docker Basics for Beginners with a complete workflow demo for Yolov5 (Part 1)

Pranjall Kumar
12 min read · Dec 29, 2022


Hi reader!

Confused with Docker? Need a quick start? Read through, and I am sure that most of your questions, doubts, or dilemmas will get answered!

We will run Yolov5 in a container!

Image by the author, of Docker Desktop

Introduction

I remember being exposed to Docker directly as a fresher, and with my limited practical knowledge, understanding it was a tedious task. Now that I have used it regularly for about one and a half years, I am pretty confident writing a blog about it to help others get a quick start, whether they are struggling or just want to know the what, when, and how of it. I will also show Docker in action, step by step, along with how I work with it. So before I begin my blabber, let me tell you who this read is most suitable for:

  1. Students who are about to graduate and are looking for a job in the IT sector, or freshers in this industry. Considering the gap between the knowledge institutes provide and industry expectations, which I have experienced firsthand, I am sure this quick read will benefit them the most, as I would like to give insights into some industry-standard ways of using Docker.
  2. People who have just started to use Docker or are looking forward to using it. If you are confused about why, how, or when to use it, or if your job simply requires you to use it and you are getting started, then this read is a good beginning.
  3. People who are looking for quick ways to clear up the basic concepts of this topic.

What is Docker?

Very recently, about a month ago, a senior colleague asked me this:

What is docker?

I was very comfortable with it and was also doing a lot with it, but the question left me dumbfounded, as I just could not summarize my understanding in a simple sentence. So I ended up saying:

Docker is an application… and… we use it! 😅

Obviously, I was told to go back and read about it more! So let's get the first thing out of the way. Now, if someone asks me, “What is Docker?” I say this:

Docker is a client-server-based application platform that helps build various kinds of software independent of the exact development infrastructure.

Now, this is not the exact statement you will find when you google this question. I have purposely added client-server here because I didn’t know this until my senior colleague asked me to go back and read more about Docker. Yes, even after more than a year of working with it! And yes, it is very important!

I have chosen these words carefully and will come back to explain them in detail soon. But this is obviously in no way a textbook definition, just my thoughts.

Docker and its concepts are an ocean in themselves. There is elaborate official documentation for Docker, but it is so huge that I can see why people struggle, since I did myself too, and still do sometimes. Hence I felt the need to write this blog.

Need for Docker

Docker provides many benefits but the main goal of docker was to solve this issue:

But it works on my system… 😥

Before Docker, if you were developing an application, you would fine-tune your infrastructure to precisely support the needs of that application: for example, installing required libraries and dependencies on your local infrastructure (like your work laptop), setting environment variables, creating a local environment, and so on.

Naturally, this created a problem: all these settings had to be replicated everywhere the application had to run, never mind the issue of migrating it from, let's say, Windows to Linux. A developer or a group of developers would create the application on an agreed-upon framework for development. They would then replicate these settings on the testers' machines, hoping every machine needed exactly the same steps. And then the same thing would be done for their customers. Imagine the chaos of the initial setup! Moreover, they had to hope that the setup documentation written for their customers stayed consistent over a good period of time!

How good would it be if all these settings could just be written in a file, and some magic would take care of making sure they are always properly applied on whatever system is needed? This is exactly what Docker does!

Basically, it creates a new small computer inside your computer, taking some of your current system’s RAM, processing time, permanent storage, and other hardware required for a computer to be functional, and installing an operating system like Ubuntu (usually small and lightweight) on that computer without its GUI. In the Docker world, this type of small computer created in your system is called a container.

You might be aware of the Virtual Machine concept, which does a similar thing, but with a GUI too if required. However, a Docker container can be made very lightweight: a bare-minimum computer suitable for running just the application you are developing and, if you want, nothing else. This isolation also has security benefits for the application.

So now you can have many mini-computers (containers) on your local system, each running an application, and each customized solely to run that application. All these customization settings are stored in a file called a docker file (literally, you create a file named ‘Dockerfile’ or ‘Dockerfile.amd64’). Moreover, you can also make these containers interact with each other if required.

Now, given Docker is installed on another machine running Windows, Linux, or whatever, Docker will create a container on that machine based on the settings provided in the docker file. All the other dependencies required to make the container run properly on the local machine are taken care of by the magic box of Docker!

Docker Engine

You can google: install docker on windows, install docker on ubuntu, or whatever, to get it on your system and verify whether it is installed properly. If you are a complete beginner, just get the Docker Desktop app, which will help you get onboarded easily with a nice GUI to interact with. Here is the link: https://docs.docker.com/get-docker/

But what actually happens when you “install docker”?

You actually install the docker engine, which is the core part. Docker Desktop is simply a GUI application that helps you interact with this docker engine. So you can install just the docker engine and skip Docker Desktop entirely! That is exactly what I will be doing here later.

And now comes the client-server part. The docker engine is actually a server that starts running on your system when you open Docker Desktop. Notice the screenshot:

Image by the author, of Docker Desktop

This docker engine is the one that helps you containerize your application, the fancy phrase for making it independent of the exact development infrastructure. Remember?

Docker is a client-server-based application platform that helps build various kinds of software independent of the exact development infrastructure.

The docker engine is a daemon process, which means it runs continuously in the background and is always available until it is explicitly stopped.
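Since the engine runs as a daemon, you can inspect and control it like any other background service. A small sketch, assuming a systemd-based Linux distribution such as Ubuntu:

```shell
# check whether the docker daemon is currently running
sudo systemctl status docker

# start it manually if it is not running
sudo systemctl start docker

# make it start automatically at boot
sudo systemctl enable docker
```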

Docker Basics

In the Docker world, you will often hear the terms Docker Container, Docker Image, Docker Volumes, and Docker Compose. These are the things I will cover in these two blogs, complete with a demo, which I think will provide a good platform for going deeper into your exact requirements.

Docker Container

As explained before, it is the runtime environment created for your application to run in.

Docker Image

A docker image is created from the settings mentioned in the Dockerfile. It contains all the information and configuration necessary for your application to run in a docker container, and for the container to run on your local system too!

So, in the Dockerfile you mention only the settings required for your application. But the Docker Image also contains the information needed to create, on your specific local system, a Docker Container suited to your application (the hard part!).

Here is the flow:

Image by the author, Docker flow

I will cover Docker Volumes and Docker Compose later in part 2.

Demo

Now the fun part! Let's get all hands-on!

Introduction

I am running a minimal version of Ubuntu 22.10, called Kinetic, as a virtual machine with no GUI on my personal Windows system, as I have gotten comfortable with the Linux environment by now (I know all the commands by heart).

Here is my starting point (before I turn off the Ubuntu GUI): I used VMware Workstation Player 17, which is free software for making virtual machines for non-commercial use. And VS Code is the development tool of my choice, connected to the virtual machine via SSH.

Image by the author, VScode and Virtual Machine

Yeah, I could have installed VS Code directly in the virtual machine, but this was a nifty little SSH setup I wanted to show off; it could be useful someday. Try to replicate it for fun by observing the screenshot. Also, you will be interacting with containers that will not give you a GUI, which is why I used the ‘small computer’ analogy for containers.

Now I will install only the docker engine on my virtual machine. I am following this documentation: https://docs.docker.com/engine/install/ubuntu/

There must be one for your OS too if it's not Ubuntu.
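In case the screenshot is hard to follow, the gist of the Ubuntu instructions (at the time of writing; defer to the linked page for the current steps) is to add Docker's apt repository and install the engine packages:

```shell
# after adding Docker's official GPG key and apt repository
# (see the linked documentation for those exact steps):
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io \
    docker-buildx-plugin docker-compose-plugin
```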

Image by the author, Installing docker-engine

If all went well, this is how the output from the hello-world image will look on your console. Notice how Docker was unable to find the hello-world image locally, so it pulled it from the internet (Docker Hub, a one-stop shop for all kinds of useful docker images). Also, keep in mind that the docker run command will pull the image in the background if it can’t find it locally.
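For reference, the command behind that output is simply:

```shell
# docker pulls hello-world from Docker Hub if it is not found locally
sudo docker run hello-world
```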

Important commands

We can see the list of all docker images on the system using:

docker image ls

Don’t forget to add your user to the docker group and restart, as shown below, to avoid having to prefix every docker command with ‘sudo’.
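Those post-install steps, as given in the official docs, are roughly:

```shell
# create the docker group (it often already exists) and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER

# log out and back in, or start a shell with the new group applied
newgrp docker

# verify that docker now works without sudo
docker image ls
```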

Image by the author, Adding docker group to USER
Image by the author, List of all Docker Images

Similarly, we can list all running containers using:

docker container ls

or

docker ps
Image by the author, List of all running Docker Containers

Notice that this hello-world container stops automatically when it completes execution; hence we do not see any running containers. To see all containers, not just running ones, use the ‘-a’ flag.

Image by the author, List of all Docker Containers

We can remove unnecessary images using the ‘image rm’ command, and containers using just the ‘rm’ command.

docker image rm <Image Name>
docker rm <Container Name>

# TAB key can be used to auto-complete names
Image by the author, removing Images and Containers

Notice how I had to remove the container first; only then could the image used by that container be removed.

You can also prune the system whenever you need to clean up or free storage space used by Docker. It removes all stopped containers, dangling images, unused networks, and the build cache in one go.

docker system prune

Making custom containers

Now let’s make a custom application that will run as a docker container. Making the application is not the focus here; containerizing it is!

So what do we do?

Step 1: Ask yourself what your application is (Blog_Demo). For us, it’s using the yolov5 PyPI library, with which you can use various Yolov5 models or train your own and do object detection!

Step 2: Decide the base Docker Image for your application. This decision is based on many things: what the application is, where it will be deployed, what runtimes it needs, what resources it needs, etc. I have an RTX 2070, but my setup is not CUDA-ready, so I will run on the CPU; still, I will use the TensorFlow-GPU docker image from Docker Hub. Here is the link: https://hub.docker.com/r/tensorflow/tensorflow

Step 3: Make the Dockerfile. Let’s make a proper workspace and add a Dockerfile to it.

Image by the author, Local working directory

A Dockerfile needs to mention the base image it will use. It could be Ubuntu, Debian, Alpine, or whatever you need. For us, it’s the TensorFlow image, which is based on Ubuntu 22.04. Here is what most Dockerfiles look like in general:

# importing the latest tensorflow base image with GPU.
FROM tensorflow/tensorflow:latest-gpu

# set the working directory in the container
WORKDIR /usr/src/app

# copy the file requirements.txt from the current local working directory into the current container working directory
COPY requirements.txt .

# run the ubuntu update command in the container and install the PyPI requirements mentioned in the requirements.txt file
RUN apt update -q && pip install -r requirements.txt

At this point, the application has these files, where requirements.txt is just this and the Dockerfile is as shown above:
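In case the screenshot is hard to read: requirements.txt only needs to pull in the yolov5 package. The pinned version below is my assumption, taken from the image tag used later (yolov5:7.0.5); adjust it to whatever you need:

```
# requirements.txt (assumed contents)
yolov5==7.0.5
```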

Image by the author, Requirements for the application

Now we build the Docker Image using docker build. We can give it a tag of our choice, like so:

docker build -t yolov5:7.0.5 Blog_Demo 

Make sure to get the path to run the docker build command right!

Image by the author, Building docker image

If all went well, in the end, your console will look like this:

Image by the author, Image built successfully

Let’s check and confirm whether the image is available:

Image by the author, Available docker images

Now, let's run the image! We want to run the container indefinitely, so we use ‘sleep infinity’. We will also detach the container using the ‘-d’ flag, which means it will free up the present console for further use. I will also give it a name, Yolov5, using the ‘--name’ flag; otherwise, Docker will give it a random funky name, as we saw earlier. Use ‘--gpus all’ to expose the GPU to the container, or exclude it if you just want to use the CPU like me. Note: CUDA needs to be installed on the development system.

docker run -d --gpus all --name Yolov5 yolov5:7.0.5 sleep infinity

If successful, we can see the container running.

Image by the author, Running docker containers

Now we can connect to it using the docker ‘exec’ command. The ‘-it’ flag connects an interactive terminal. Notice that our requirements.txt file is exactly in the working directory where we told Docker to put it.
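Sketched as commands, that connection looks like this (Yolov5 is the container name we chose above):

```shell
# open an interactive bash shell inside the running container
docker exec -it Yolov5 bash

# we land in /usr/src/app, the WORKDIR set in the Dockerfile;
# listing it shows the copied requirements.txt
ls
```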

And this is it! We have the yolov5 library installed on a small computer running on our local computer! How to use the yolov5 library is out of the scope of my little demo, but there are plenty of resources out there; you can check the PyPI documentation: https://pypi.org/project/yolov5/

For fun, just check some Python:
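A quick sanity check, assuming you are at the container's shell (NumPy ships with the TensorFlow base image):

```shell
# print a small NumPy array to confirm the Python environment works
python -c "import numpy as np; print(np.arange(6).reshape(2, 3))"
```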

Image by the author, Printing NumPy array

You can exit the interactive terminal using the ‘exit’ command and stop the container using the docker ‘stop’ command.

docker stop Yolov5

Thanks for reading! I hope it helped. Next up: Docker Volumes, Docker Compose, and multi-stage builds.

Bye!


Pranjall Kumar

Research Engineer at Siemens. Free time is photography time.