“Docker allows you to package an application with all of its dependencies into a standardized unit for software development.”
Before I tell you about Docker, let me tell you a story that every developer has experienced at least once.
Once upon a time, there was a young developer quietly coding on his computer. He was in a hurry because he had to present his work the next morning. After hours of work, the application was there, and it worked perfectly! The next day, our coder proudly arrived for his presentation with his project on a USB key. He transferred it to his friend’s computer and there, it didn’t work!
What is the problem?
Our developer’s application doesn’t work on his friend’s computer because of an environment problem. Between two systems, there can be differences in dependency versions or missing libraries.
Here, our problem is limited to 2 systems, but imagine a team of 10 people with computers running macOS, Linux, or even Windows, a test server under Ubuntu, and a production server under CentOS. Making sure their application works well in all these environments can be a real nightmare!
But fortunately, there are solutions, and among them is Docker.
What is Docker?
Docker is a platform that allows you to execute your code inside a container, independently of the machine you are on! A container is similar to a virtual machine, except that it doesn’t carry a whole operating system with it, which makes it much lighter and allows it to start in a few seconds.
So Docker can solve our environment problem, because no matter which machine we use, the code will run the same way.
Installation and first hands-on
For Docker installation instructions for your system, I redirect you to the official website.
After that, you can check that Docker is installed by running the command
docker --version
Run your first container
You can run your first container by running the command
docker run hello-world
To run this image, Docker first tries to find it locally. If it exists, Docker launches it directly in a new container; if it doesn’t, Docker launches the pull command
docker pull hello-world
which pulls the image from the Docker Hub, and then runs it in a new container.
To list all your local Docker images, you just have to run
docker images
To delete an image, use
docker image rm REPOSITORY for example
docker image rm hello-world
To list all the running containers, you just need to run
docker ps
and to see all the containers, even the stopped ones, run
docker ps -a
To stop a running container you just need to run
docker stop CONTAINER_ID
To delete a container you just need to run
docker rm CONTAINER_ID and
docker rm -f CONTAINER_ID to force the deletion of a running container. But I don’t recommend this; it’s better to stop the container before deleting it.
For more details on Docker commands, visit the Docker reference.
Docker provides developers with an online service called the Docker Hub, designed to make exchanging containerized applications easy. Hosting more than 100,000 container images, this space is also integrated with GitHub. It covers a wide range of areas (analytics, databases, frameworks, monitoring, DevOps tools, security, storage, operating systems…). Some images, labeled as official, are maintained directly by Docker; others are contributed by the community and tested and verified by Docker. The San Francisco-based company also markets a version of the Docker Hub that can be installed locally (the Docker Hub Enterprise). Finally, Docker has launched an online application store, with the goal of offering publishers a commercial channel to distribute their applications in the form of containers.
When you click on any image you will find a page like this
Copy and paste the command
docker pull nginx and Docker will start pulling the Nginx image to your local workspace.
If you give only the image name to the pull command, Docker will pull the latest existing version. So if you want to get a specific version, you need to use tags. For example, if you want to get version 1.18, you run
docker pull nginx:1.18
When the pull finishes, you can check your local images using
docker images
You will find 2 images with the same name but with different tags.
Running a specific image works like the pull command: you just need to add the tag after the image name
docker run nginx:1.18 or
docker run nginx to run the latest version.
You can see details about a docker image using the command
docker inspect nginx
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using
docker build
users can create an automated build that executes several command-line instructions in succession.
Build your first image
Let’s create an image for a Java program and run it inside a container.
I’ll use a simple Java program that prints “This is java app using Docker” when you run it.
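As a sketch, the program could be as simple as this (the class name Main is an assumption, not something fixed by the article):

```java
// Main.java — prints the message once and exits
public class Main {
    public static void main(String[] args) {
        System.out.println("This is java app using Docker");
    }
}
```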
Now let’s create our Dockerfile with the following instructions.
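A sketch of what this Dockerfile could look like, assuming the program is saved as Main.java (the openjdk:8 base image is the one we will see in the image list below):

```dockerfile
# Compile Main.java at build time, run the compiled class at container start
FROM openjdk:8
WORKDIR /app
COPY Main.java .
RUN javac Main.java
CMD ["java", "Main"]
```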
Once we have these two files, we are ready to build our image for the Java program by just running
docker build -t IMAGE-NAME:TAG .
In our case, it will be
docker build -t my-java-app:1.0 .
Now let’s see what images we have using
docker images
We have our image my-java-app and another one called openjdk. Remember, in the Dockerfile we have FROM openjdk:8: our image is based on another image.
We will need the openjdk image every time we launch a build for our Java app.
Push your first image to the Docker Hub
Once your image is ready, you can push it to the Docker Hub. For this, you need to have a Docker account. If you don’t have one, sign up for it here.
Then log into the Docker public registry on your machine using
docker login -u USERNAME -p PASSWORD
Now you need to add a tag to your image before pushing it using this command
docker tag my-java-app:1.0 bksofiene/my-java-app:1.0-Final and finally, run
docker push bksofiene/my-java-app:1.0-Final
Now, as you can see, my image is uploaded to the Docker repository, and it is accessible anywhere at any time. You can change the image’s visibility and make it private if you want.
For more details about the Dockerfile visit the Dockerfile reference.
Containers do not have a public IPv4 address; they only have a private one. Therefore, all services running in a container must be exposed port by port.
Container ports must be mapped to the ports of the host to avoid conflicts.
When Docker starts, it creates a virtual interface called
docker0 on the host machine, with a randomly allocated IP address.
To see existing networks you just need to run
docker network ls
By default, Docker has 3 networks:
- Bridge: the default Docker networking mode, which enables connectivity to the other interfaces of the host machine as well as among containers.
- Host: in this mode, the container shares the host’s network stack, and all interfaces of the host are available to the container. The container’s hostname will match the hostname of the host system.
- None: this mode does not configure any IP for the container, which has no access to the external network or to other containers.
Now if we want to create our own network we just need to run
docker network create NETWORK_NAME
let’s do a little exercise to understand things better.
We are going to create an application with 3 containers in a private network:
- Web1: an unmapped Nginx server that should display Server ONE
- Web2: an unmapped Nginx server that should display Server TWO
- myhaproxy: a mapped HAProxy server, which will receive the clients’ HTTP requests in the FrontEnd on port 80 of the Docker host, and redirect them in the BackEnd to the web servers.
HAProxy is a free, open-source high availability solution, providing load balancing and proxying for TCP and HTTP-based applications.
In our case, we will use it for a round-robin load balance.
I’ll start by creating 2 folders, web1 and web2. Inside the first one, I’m going to create an HTML file called index.html that contains Server ONE, and inside the second one I’ll create the same file but with Server TWO as its content.
Inside each folder, we also need to add a Dockerfile.
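As a sketch, the Dockerfile inside web1 could look like this (the one in web2 is identical, copying its own index.html; /usr/share/nginx/html is the directory the official Nginx image serves static files from):

```dockerfile
# Serve our custom index page with the official Nginx image
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
```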
Now let’s build the images. If you are inside the web1 folder, run
docker build -t server1 .
If you are inside the parent folder, run
docker build -t server1 web1
Same for the web2 folder:
docker build -t server2 . inside the folder, or
docker build -t server2 web2 inside the parent folder.
Now let’s create the haproxy image. First of all, we need to configure the HAProxy configuration file, haproxy.cfg.
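Here is a minimal sketch of what haproxy.cfg could contain, assuming the two web containers are reachable by their names web1 and web2 on the same Docker network (the timeout values are illustrative):

```
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    server web1 web1:80
    server web2 web2:80
```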
Then we create the Dockerfile.
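A short sketch of the corresponding Dockerfile, based on the official haproxy image, which expects its configuration at /usr/local/etc/haproxy/haproxy.cfg:

```dockerfile
# Replace the default config with ours
FROM haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```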
Now let’s build our image
docker build -t myhaproxy myhaproxy
After that, make sure you have all the 3 images and the web network (created with docker network create web).
Finally, we run our images:
docker run -d --network=web --name=web1 server1
docker run -d --network=web --name=web2 server2
docker run -d --network=web --name=haproxy -p 80:80 myhaproxy
-d : detach (run the container in the background),
--network : to define our network,
-p : to expose and map a port
Now run
curl 0
(shorthand for curl 0.0.0.0) many times. You will get a response from a different container each time.
For more details about the docker network visit Docker networking.
By default, the data in a container is ephemeral, so we have to find a way to persist it. That’s why Docker provides volumes.
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
To create a volume, we will use the following command
docker volume create test-volume and then use
docker volume ls to list all existing volumes.
To get details about the created volume use
docker volume inspect test-volume
As shown in the Mountpoint value, our data will be stored under /var/lib/docker/volumes/test-volume/_data.
Now let’s go into this directory and create a text file called test.txt with This is a test for a volume use case inside:
cat >> test.txt then type your text, and close the input mode with CTRL+D.
Let’s go back to our work directory using
cd ~ and create a Docker image that accesses the
test.txt file and edits it.
Let’s create our Dockerfile.
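As a sketch, a minimal Dockerfile for this test could be (the ubuntu base image is an assumption — any image with a shell works):

```dockerfile
# Start a shell in /data, where the volume will be mounted
FROM ubuntu
WORKDIR /data
CMD ["bash"]
```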
Then build the image using
docker build -t test . then run the image using
docker run -it --name test_container -v test-volume:/data test
--name : the name of the container,
-it : to run in interactive mode with a shell,
-v : to mount the volume
You will find yourself inside the working directory /data.
Now you can read and edit the
test.txt file as you want.
The file is accessible by all containers that use the test-volume volume.
Finally, to delete the volume, use
docker volume rm VOLUME-NAME But make sure you have stopped all containers that use the volume before deleting it.
For more details about volumes visit Use volumes.
Docker Compose is a tool for creating and managing multi-container applications.
All the containers are defined in a single file called docker-compose.yml.
Each container manages a particular component/service of your application, and docker-compose runs them all using only one command.
For the installation guide, I redirect you to the official website.
Run
docker-compose version to check your docker-compose version.
Now let’s take the load-balancing exercise and try to automate it using docker-compose.
We need to have the same folders as in the previous exercise.
Now create the docker-compose.yml file.
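Here is a sketch of what the docker-compose.yml could look like, assuming the web1, web2, and myhaproxy folders from the previous exercise sit next to it:

```yaml
version: "3"
services:
  web1:
    build: ./web1      # Nginx serving "Server ONE"
    networks:
      - web
  web2:
    build: ./web2      # Nginx serving "Server TWO"
    networks:
      - web
  haproxy:
    build: ./myhaproxy # round-robin load balancer
    ports:
      - "80:80"        # map port 80 of the host to the container
    networks:
      - web
networks:
  web:
```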
docker-compose up -d
-d : for the detach mode.
To see your running containers, use
docker-compose ps
Now, once your containers are running, you can test by using
curl 0
For more details about the docker-compose visit the docker-compose reference.
With Docker, you can multiply the environments on your machine without limiting your computer’s performance, since resources are shared with the host machine! Each environment can be easily configured thanks to the Dockerfile present at its root.
The purpose of this article was to introduce you to Docker and to help you better understand the solutions that can be brought to the various problems of developers.