Getting started with Docker

Docker is an umbrella term for a set of complementary tools that work together to provide a high-level API for automating the shipping and deployment of your application. Docker simplifies the deployment process and supports DevOps practices through containerization, process isolation, and union filesystems. So let's understand the concepts behind Docker before we get into the implementation details.

Docker Image

A Docker image is the basis of containers. It's a collection of layers stacked on top of each other: each image references a list of read-only layers that represent filesystem differences. Think of it like a JAR file for Java applications: you create one JAR file, but you can deploy it anywhere a Java runtime is available.
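You can actually see these stacked layers for any image you have pulled. As a quick sketch (this requires Docker to be installed and the image to be present locally):

```shell
# List the read-only layers that make up the ubuntu:latest image,
# along with the instruction that created each layer and its size
docker history ubuntu:latest
```

Each row in the output corresponds to one layer of the image's filesystem stack.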

Figure: structure of a Docker image

We can create our own Docker images, but often you will find that the exact image you wanted already exists on Docker Hub. Docker's image-management tools provide an easy way to build, ship, and deploy: build your own image, push it to Docker Hub, and anyone can pull the image to deploy a container.

Furthermore, Docker images are extensible. Yes, that's right! You can extend one of the official Docker images, inheriting all the attributes of the image you selected while adding your own settings on top. The advantage is reuse of open source work: you don't have to reinvent an image, you simply extend an existing one.
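As a minimal sketch of extending an official image (the file name index.html here is just an assumption for illustration):

```dockerfile
# Inherit everything from the official nginx image
FROM nginx:latest

# Add our own content on top; all other nginx defaults are kept
COPY index.html /usr/share/nginx/html/index.html
```

Building this produces a new image that behaves exactly like nginx, except it serves your page.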

Docker Container

A Docker container is the actual running piece created from a Docker image. The only difference between a Docker image and a Docker container is a top writable layer: when you create a new container, Docker adds a new, thin, writable layer on top of the underlying stack. This layer is often called the "container layer". All changes made to the running container (writing new files, modifying existing files, deleting files) are written to this thin writable layer. But once you delete the container, this top layer is deleted as well, so it is not persistent. A great feature of Docker is that you can create a Docker image from the current state of a container with a commit, capturing the system's state as an immutable image that is reproducible anywhere. This solves many of the server-configuration problems we encounter these days.
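A sketch of that commit workflow (the container name my_container and the repository name myuser/myimage are placeholders; this requires a running Docker daemon):

```shell
# Capture the current state of a running container as a new image
docker commit my_container myuser/myimage:v1

# The new image can now start identical containers anywhere
docker run myuser/myimage:v1
```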

Dockerfile

The blueprint of a Docker image (a text document) is known as a Dockerfile. This file contains all the commands you would otherwise run by hand to build the image you want. Docker can build images by reading this file, which is one of the key advantages of Docker.

#
# Super simple example of a Dockerfile
#
FROM ubuntu:latest
LABEL maintainer="Andrew Odewahn <odewahn@oreilly.com>"

# Install Python, pip, and Flask in a single layer
RUN apt-get update && \
    apt-get install -y python python-pip wget && \
    pip install Flask

# Copy the application into the image
COPY hello.py /home/hello.py

WORKDIR /home

# Run the application when a container starts
CMD ["python", "hello.py"]

So, putting together everything we have discussed so far: we first write a Dockerfile, which is the definition of the image. From the Dockerfile we build a Docker image. We then push this image to Docker Hub under a name and tag that uniquely identify it. Using that name and tag, anyone can pull the image and deploy it on another computer as a Docker container.
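As a sketch of that workflow (the repository name myuser/hello-flask is a placeholder; pushing requires a Docker Hub account and a prior docker login):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it with a Docker Hub repository name and version
docker build -t myuser/hello-flask:1.0 .

# Push the image to Docker Hub
docker push myuser/hello-flask:1.0

# On any other machine: pull the image and run it as a container
docker pull myuser/hello-flask:1.0
docker run myuser/hello-flask:1.0
```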

Docker in Action

Docker introduces so many new concepts and terms that, after a while, you can get a headache keeping them apart. So let's take a break from the theory, run some commands, and get a feel for what Docker can offer. First and foremost, you need to install Docker on your machine. You can find clear and precise installation instructions on the Docker website for each operating system. If you are on Windows or Mac, download the Docker Toolbox and follow the wizard; everything will be set up for you. If you are on Linux, there is an easy way to install Docker using curl or wget.

 curl -sSL https://get.docker.com/ | sh

or

wget -qO- https://get.docker.com/ | sh

Make sure you follow the instructions for your operating system properly.

Assuming you have installed Docker properly, let's continue and run some commands. To check that Docker is installed, run

docker version

You should see output similar to this:

Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:30:23 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:30:23 2016
 OS/Arch:      linux/amd64

which means you have Docker installed on your machine. Let's check which containers are running on your machine by issuing the following command:

docker ps

PS: If the current user is not in the docker group, you will get an error such as "Cannot connect to the Docker daemon". In that case, add your current user to the docker group (you will need to log out and back in for the change to take effect) using

sudo usermod -aG docker $your_user

If everything is working fine, you should see a list of running containers. But if you are a beginner, it will be empty. So let's try running an Ubuntu container with a simple hello-world example.

docker run ubuntu echo "hello-world"

This one command triggers several other Docker operations behind the scenes. It says: "Docker, please run an Ubuntu container and print hello-world". Before I explain how Docker takes care of the rest, let me briefly explain image names and tags. Each Docker image is uniquely identified by an image name plus a tag. For example, there are several nginx images, such as nginx:alpine and nginx:latest. If you do not specify a tag, Docker assumes the tag 'latest', so when we request an ubuntu container it runs ubuntu:latest.
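A quick sketch of how tags work in practice (ubuntu:16.04 is one of the tags published on Docker Hub; this requires a running Docker daemon):

```shell
# Request a specific tag of the ubuntu image
docker run ubuntu:16.04 echo "hello from 16.04"

# No tag given, so Docker assumes 'latest' (same as ubuntu:latest)
docker run ubuntu echo "hello from latest"
```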

Now that that's clear, let's see how Docker does the necessary work to run our command. First, Docker checks whether your local repository has an image named ubuntu:latest. If there is no such image, it pulls the image from Docker Hub or any pre-configured private/public registry. You can confirm this by issuing the following command, once you have run the command above, and observing the images on your local machine.

docker images

Once the image is pulled, Docker creates a container and runs the echo command. This prints "hello-world" and exits. Amazing, right? You just spun up an Ubuntu environment and printed 'hello world' in a matter of seconds. This makes Docker containers very handy for short-lived, small-scale jobs: you can run one particular command and then get rid of the container.

You can run a container in daemon mode (in the background) by adding the -d flag to the docker run command. So, let's try running an nginx container in daemon mode.

docker run -d nginx

This fetches the 'latest' nginx image and starts a Docker container in daemon mode. In order to access the nginx server, we first need to know either the name of the container or its ID; each container has a unique name and ID that can be used in Docker commands. We can name a container with the '--name' flag on docker run; if we do not specify a name, Docker comes up with a random but memorable name for the container. The container ID is generated automatically. So let's run docker ps and check the name and ID of your nginx container. You should see output similar to the listing below.
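An illustrative docker ps listing (reconstructed; your container's name, ID, and timestamps will differ):

```
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS             NAMES
99654384943a   nginx   "nginx -g 'daemon off"   2 minutes ago   Up 2 minutes   80/tcp, 443/tcp   kickass_bose
```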

We have a container named 'kickass_bose' (you might have a different name) running with container ID 99654384943a. The listing also shows the container's creation date/time, its status (created, up, exited), its exposed ports, and other information. Here it indicates that the container has ports 80 and 443 open, which means nginx is running on those ports. Now, in order to access this server, we need the IP address of the container. Each container has its own Docker-generated IP address, which can be used to reach it from the host system. The following command gives us this information.

docker inspect kickass_bose | grep IPAddress

Using the docker inspect command, we can get the full details of a container: network interfaces, mounted volumes, configuration information, and so on. As I mentioned earlier, Docker commands accept either the container name or the container ID; here I used the container name, but you can use the ID as well. Remembering names is much easier than remembering random hex strings. Since we are only interested in the container's IP address, we piped the output through grep. (Try docker inspect without the pipe to see everything it reports about your container.) The command above prints the IP address assigned to your container, and with it you will be able to reach the nginx welcome page from your browser.
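As a tidier alternative to grep, docker inspect also accepts a Go template via its --format flag. A sketch, using the container name kickass_bose from the example (yours will differ):

```shell
# Print only the container's IP address using docker inspect's
# built-in Go templating, instead of grepping the full JSON output
docker inspect --format '{{ .NetworkSettings.IPAddress }}' kickass_bose
```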

We can map a host port to a container port, so the container can be reached even from remote machines. To do that, use the -p flag in the form HOST_PORT:CONTAINER_PORT. The container port must be exposed by the container for this to work; since the nginx image exposes ports 80 and 443 by default, we can bind one of them to a host port.

docker run -d --name nginxServer -p 8080:80 nginx

There is nothing new in this command; we simply named the container and mapped the two ports. You should now be able to access your second nginx container simply by visiting http://localhost:8080

Now run docker ps again; you should see two nginx servers running. You can run any number of nginx containers (limited only by your computing resources) concurrently with Apache containers, IIS containers, and so on. Each container can have different settings, different programming languages, different runtime environments, etc.

The other advantage of Docker, since containers are so lightweight, is speed of execution: you can start, stop, create, or terminate a container in a matter of seconds. Let's stop our second nginx container by issuing the following command.

docker stop nginxServer

This stops the container, and no one can access it any more; if you try http://localhost:8080 again, you will get an error page. Now if you run docker ps, you will see only one container running, because by default docker ps lists only the running containers. With the -a flag we can list all the containers on the machine.

docker ps -a

The above command should list the nginxServer container and show its status as Exited. You can start the container again by issuing

docker start nginxServer

It brings the nginx server back up in no time. So imagine each container holding a web application: you can simply switch the service on and off using the commands above. This opens up possibilities that were hardly imaginable a few years ago.
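When you are done experimenting, you can clean up. A sketch of the teardown (remember that removing a container deletes its writable layer, so anything written inside it is lost):

```shell
# Stop the container, then remove it and its writable layer
docker stop nginxServer
docker rm nginxServer

# Optionally remove the downloaded image as well
docker rmi nginx
```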

Is that all you can do with Docker containers? Nope, this is just one of many use cases. There are several ways to use Docker containers to deploy your applications; this was just an introduction to the basic Docker commands. I hope it was a worthwhile read. Happy dockering!