Docker

Chathushka Hansani
Published in Aeturnum
8 min read · Feb 23, 2018

This is my second blog related to Docker. To get some insight into Docker and the evolution from physical servers to containers, please refer to my previous blog, linked below.

Let’s get to know more about docker. :)

As I’ve explained in my previous blog, Docker is a product/software, not a concept.

Containers always run on top of the Docker Engine.

Image [1]: Use of Docker.

The Docker Engine is not the whole Docker project; it is the core that carries all the configuration within it.

  • Orchestration
  • Registry
  • Security
  • Services are built on top of/around the Docker Engine.

Let’s go with Registry and Orchestration.

Registry

A registry is a place where you can store your Docker images. For example, if you create a new Docker image or customize an existing one, you can push the image back to a registry, to pull it again later when you need it.

Docker Hub is the best-known and largest registry in use nowadays.

The Docker Hub registry allows both public and private repositories for saving Docker images for later use.

  • Public repositories allow anyone who can access Docker Hub to pull images from them, while only users who have been granted access can push images into the repository.
  • Private repositories allow access only for a limited set of users, and the owner controls the repository.

You can also have a local Docker registry by using a framework that supports the Docker registry API.
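As a minimal sketch of this, Docker’s own open-source `registry` image can host a local registry (the image name and port 5000 are that project’s defaults; the `my-ubuntu` name is just an illustrative choice):

```shell
# Start a local registry container on port 5000
docker run -d -p 5000:5000 --name local-registry registry:2

# Tag an existing image for the local registry and push it
docker tag ubuntu:latest localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu

# Later, pull it back from the local registry
docker pull localhost:5000/my-ubuntu
```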

Orchestration

  • A simple answer to the question “What is orchestration?” would be: orchestration is a process that takes all the Docker containers together and drives them toward a common goal, e.g. an HTTP container, an OAuth container, and other services working as one system.
  • Orchestration comes forward when we deploy services using Docker images. Orchestration is the process that defines all the behaviors of these containers, e.g. where each container should go, how it should behave, the dependencies between containers, and which container should start first and which should come next.
  • “Kubernetes” is one of the tools/software platforms designed to handle the orchestration process.

There are lots of wrong concepts and beliefs around Docker. Here are a few, to help you understand Docker correctly:

  • Some say containers are not persistent, but that is not quite right. A container is persistent by nature: whatever changes you make inside it will remain even when you stop and start the container again, e.g. folders and files created inside the container will still be there. However, once the container is removed, those changes are gone unless they were committed to an image or written to external storage.
    (For databases it is advised to use an external storage environment, such as a mounted volume, but even on the container’s own storage it is fine.)
  • Some say the number of containers should equal the number of cores in the processor that runs them, but that is wrong. You can create far more containers than there are cores; if you could not, the entire container technology would be pointless.
  • Some say Docker is only ideal for new applications, not for older applications; this is 50% true and 50% false.
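The external-storage advice above can be sketched with a named volume (the volume name `app-data` and the paths are illustrative):

```shell
# Create a named volume and mount it into a container;
# data written under /data lives in the volume, not the container
docker volume create app-data
docker run --name demo -v app-data:/data ubuntu:latest \
    bash -c 'echo "hello" > /data/file.txt'

# Even after the container is removed, the volume survives,
# so a new container can read the same data
docker rm demo
docker run --rm -v app-data:/data ubuntu:latest cat /data/file.txt
```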

Let’s take a look at that last point: Docker for new applications versus Docker for older applications.

If we are hoping to use Docker for our services, it is always better to go with a microservice architecture. With a microservice architecture we can run a separate, independent container for each service. Containerization also gives your application self-healing capabilities: for example, if one container receives too much traffic, it can spin up a new container of the same image and redirect traffic to it. To get full-featured containerization, the application should fit into an architecture that is compatible with Docker.

So the saying “old applications do not fit into containerization” is only halfway true; they may still be supported. Technically there are no limitations on containerizing them, but because these old applications were developed for a physical box or a virtual machine, they may bundle everything together, e.g. the HTTP module, processing, orchestration, etc., according to that architecture. Since old applications were not written to match a container-friendly architecture, the better option is often to rewrite or redesign them into a microservice architecture.

OCI (Open Container Initiative)

Image [2]

Many companies have developed products around the concept of “containerization”, e.g. Docker and rkt (Rocket). All these products should follow common specifications, as all of them were developed for the same concept.

The OCI (Open Container Initiative) defines those specifications and governs container development. In a properly developed application, all the containers comply with the “OCI standards”.

To bring the “OCI standards” to every container in an application, the Docker Engine itself was decoupled: the platform-specific code was moved into separate components of the engine that manage the specifications of container development.

The purpose of the OCI standards is the idea that containerized applications should not be locked into one vendor or one platform.

Install docker on your machine

Docker is available for many platforms, such as Linux, macOS, Windows, etc.

On Linux and macOS the installation process is almost the same: we can install Docker directly on both.
On Windows 10 we can also install Docker directly; on older versions of Windows, you need Docker Toolbox to install Docker.

Steps to get Docker for Windows

  • Search for “Docker Toolbox” and download the installer.
Image [3]

This will install Docker Toolbox along with VirtualBox on your machine, and it will also automatically install a Linux image into VirtualBox.
After that, whenever you run a Docker command in the terminal, the command is passed to the virtual machine, where the Linux image processes it and returns the response.

Now let’s create a Docker container :)

To create a Docker image we need a Docker configuration file, the Dockerfile.

“Dockerfile” is the standard name for this file. In the Dockerfile we configure commands, i.e. what the image contains and what happens when a container is run from it.

In a Dockerfile, the “FROM” command tells Docker which base image to download from the repository. After the “:” you mention the version tag, or “latest”, e.g. “FROM ubuntu:latest” or “FROM ubuntu:16.04”.

Let’s see some of the Docker commands

Before anything else, we need to install Docker. On Ubuntu, the command would be,

  • “sudo apt install docker.io”
Image [4] : installing docker using a terminal

Then we need to create a Dockerfile. Command to create a Dockerfile,

  • “touch Dockerfile”

Edit the created Dockerfile to configure it by writing commands,

  • “vim Dockerfile”
Image [5]

After the command “vim Dockerfile”, you will get an editable terminal. Press the “Insert” key (or “i”) on the keyboard, and type the commands.

Image [6]

“FROM ubuntu:latest” means: download the latest Ubuntu image from the repository.

“CMD [“echo”, “hello docker”]” means: print the sentence “hello docker” when this container runs. “CMD” stands for command. Here the command is configured inside an array within “[]”: 0th index “echo”, 1st index “hello docker”.

This Dockerfile will download the latest version of Ubuntu and print “hello docker” when the container runs.
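Putting the two lines described above together, the full Dockerfile (the one shown in Image [6]) would look like this:

```dockerfile
# Use the latest Ubuntu image as the base
FROM ubuntu:latest

# Print "hello docker" when a container is started from this image
CMD ["echo", "hello docker"]
```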

After creating the Dockerfile, we need to build the image. Command to build the Docker image,

  • “docker build <pathToDockerfile>”
Image [7] : Command to build a docker.

This command will download the latest Ubuntu version, as shown in Image [7].

When building the image we can also give it a name, so we can use that name instead of the generated hash.

  • “docker build -t <docker name> <location of docker>”

“-t” means tagging, and a “.” (period) as the <location of docker> represents the current directory as the build location.

When building, Docker will download the requested image, then create an intermediate container, then create the image we requested, and then remove the intermediate container. Please refer to Image [8] below.

Image [8]

Command to view all the docker images,

  • “docker images”
Image [9]

Command to run the docker,

  • “docker run --name <docker_name> <image id of the created docker>”
Image [10]

The output of the run command for the container above is “hello docker”.
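The build-and-run steps above can be sketched end to end like this (the tag `hello-docker` and container name `my-hello` are illustrative choices):

```shell
# Build the image from the Dockerfile in the current directory,
# tagging it so we don't have to use the generated hash
docker build -t hello-docker .

# Run a container from the tagged image; the CMD in the
# Dockerfile makes it print "hello docker" and then exit
docker run --name my-hello hello-docker
```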

Command to list all of the containers (the “-a” flag includes stopped ones),

  • “docker ps -a”
Image [11]

To remove a running container, you first need to stop it and then remove it. Commands for those two steps,

  • “docker stop <container name>”
  • “docker rm <container name>”
Image [12]

Then the container gets stopped and removed, and disappears from the container list.

Image [13]

After stopping and removing the ui-aggregator container, it no longer appears in the container list produced by the command “docker ps -a”.

Image [14]

Above are some of the most common and basic Docker commands, which you should know when you start to learn about Docker. There are many more commands and more things to learn about Docker yet.

I hope you were able to gain some knowledge of Docker commands and more details about Docker from my second blog.

Wait for my next blog too :).
