Docker in a Nutshell

What is Docker?

Emre Ceylan
Devjam
10 min read · Apr 17, 2020


Docker is a container management engine that uses Linux kernel features like namespaces and control groups (cgroups) to create containers on top of an operating system and automates application deployment in those containers. In other words, it is an open platform for developers and system admins to build, ship and run containerized applications. Build, ship and run are the key words here, and we will revisit them later.

Containers

A container is a standard unit of software that packages up your code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. It is an abstraction at the application layer that packages code and dependencies together. In Docker terms, containers are instances of Docker images that can be started with the docker run command. It would not be wrong to say that the basic purpose of Docker is to run containers.
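
For example, assuming Docker is installed, starting a container is a one-liner; hello-world is a tiny demo image that Docker pulls from Docker Hub automatically on first use:

$ docker run hello-world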

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, run-time, system tools, system libraries and settings.

Container images become containers at runtime; in the case of Docker, images become containers when they run on Docker Engine.

We will talk more about Docker images later on.

Containers vs Virtual Machines

Virtual machines (VMs) are managed by a hypervisor and run on virtualized hardware, while container systems provide operating system services from the underlying host and isolate the applications using virtual-memory hardware.

In one sentence, a VM provides an abstract machine that uses device drivers targeting the abstract machine, while a container provides an abstract OS.

Containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance. With VMs, the hardware is being virtualized to run multiple OS instances.

Sharing OS resources such as libraries reduces the need to reproduce operating system code and means that a server can run multiple workloads with a single operating system installation. Containers are very light: they are only megabytes in size and take just seconds to start. By comparison, VMs are much larger than an equivalent container and take minutes to boot.
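
You can observe this startup speed yourself. As a small sketch (assuming Docker is installed), the following starts a container from the tiny Alpine Linux image, which is only a few megabytes, runs a single command and removes the container again, typically within a second or two:

$ time docker run --rm alpine echo "hello from a container"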

Containers’ speed, agility, and portability make them a handy tool for boosting software development.

Source: https://www.docker.com/what-container#/package_software

Dockerfile and Docker Images

A Dockerfile is a file that you create which serves as a recipe (or blueprint) for a Docker image; running a separate build command against it produces the image.

A Dockerfile consists of a sequence of instructions (see the Dockerfile reference) that are executed in order to create an image. The resulting image has a layered structure, where the layers contain the commands, libraries and dependencies used.

Some layers may be shared across Docker projects, so the layered approach guarantees reuse of what has already been downloaded.
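
You can inspect the layers of any local image with docker history (my-image-name below is a placeholder for one of your own images):

$ docker history my-image-name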

A Docker image is the result of executing the steps described in the Dockerfile. You may think of an image as a template created by the Dockerfile. In other words, images can be thought of as snapshots: a picture of a container’s filesystem at a specific point in time. Docker images never change; once you have built one, it can be deleted, but not modified.

Images can be built directly from Dockerfiles, or they can be pushed to and pulled from Docker Hub. Docker Hub hosts official images of, for example, Ubuntu, NGINX, MongoDB, Node.js, Postgres and so on.
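
Pulling such an official image is a one-liner; the versions below are just illustrative:

$ docker pull nginx:latest
$ docker pull postgres:12
$ docker images   # lists the images now available locally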

The act of running a Docker image creates a Docker container (as discussed in the sections above), so a container is a running instance of an image. The Docker daemon is the process that controls and manages the containers.

The “build, ship and run” keywords mentioned in the beginning thus map to our three pillars respectively:

  • Dockerfiles are built
  • Images are shipped
  • Containers are run
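
As a minimal sketch of that cycle (the image and repository names are hypothetical):

# build: turn the Dockerfile in the current directory into an image
$ docker build -t my-image-name .
# ship: tag the image and push it to a registry such as Docker Hub
$ docker tag my-image-name my-dockerhub-user/my-image-name
$ docker push my-dockerhub-user/my-image-name
# run: start a container from the image
$ docker run my-dockerhub-user/my-image-name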

In conclusion, a container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries and settings.

So the software that we develop is going to be independent of the host operating system and of any libraries or runtimes present on it.

The problem Docker solves

Most modern applications have similar setups: they use a combination of different technologies to build up the complete application functionality. A typical example would be an app that blends the following services:

  • A Spring Boot app as the backend
  • Node.js as the web server
  • Angular for the frontend
  • Postgres as the database

An application needs to run in an environment, which could be a local development, test or production environment. Since environments can differ in OS, versions, hardware and so on, the application and its technologies, with their respective versions, need to work the same across environments.

Without Docker, each environment the application runs on (dev, test, etc.) would have to be configured with the correct versions of these services so that the application can run properly. This means we would probably face many compatibility problems.

Docker therefore mainly solves problems like:

  • missing or incorrect application dependencies such as libraries, interpreters, code/binaries.
  • conflicts between programs running on the same computer such as library dependencies or ports.
  • limiting the amount of resources (CPU, Memory..) an application can use.
  • packaging and isolation, which also enables rapid dynamic scaling in a microservice environment.

Each service manages its required OS dependencies for itself, bundled and isolated in its own container.
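
Resource limits, for instance, are plain docker run flags; the sketch below caps a hypothetical my-image-name container at 512 MB of memory and one and a half CPU cores:

$ docker run --memory=512m --cpus=1.5 my-image-name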

Moreover, to make things easier for developers, there are already hundreds of ready-made Docker images for different environments on Docker Hub. For example, if we need a Postgres DB for local app development, we can just pull a ready image with the version we need.
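
As a sketch, the following pulls a specific Postgres version and starts it locally in the background; the container name, port and password are just examples:

$ docker pull postgres:12
$ docker run --name local-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres:12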

Deployment of web applications with Docker containers

As mentioned, typical applications consist of a mixture of components such as a backend, a frontend, a database, a web server or another stand-alone dependency. To manage a full-fledged application, we would need to create and maintain many separate Dockerfiles. Here, docker-compose comes to the rescue: with its help, we can define a multi-container application (the whole stack) in one single file and run it with a single command:

  • docker-compose up

Let’s also add the official definition from Docker: “Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.”

Please see the Compose file reference for a complete guide to the docker-compose.yml file.
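
The day-to-day workflow then boils down to a handful of commands, run from the directory that contains docker-compose.yml:

$ docker-compose up -d     # create and start all services in the background
$ docker-compose ps        # list the running services
$ docker-compose logs -f   # follow the logs of all services
$ docker-compose down      # stop and remove the containers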

In the next section, we will see how to compose a sample Spring Boot application together with a Postgres database and run both of them on a Docker host.

Containerized Sample Spring Boot App with Postgres DB

First, make sure Docker is installed on your machine; if not, please get Docker here or Docker Toolbox here.

You may want to visit this link to see how a stand-alone Spring Boot application is built with Docker using a simple Dockerfile. Then running the containerized application is as simple as this:

  • $ docker run -p 8080:8080 -t my-image-name

PS: You can check the Docker run reference here.

We will do something similar but a bit more advanced: the application will retrieve data from a database, and we will configure both of them together with docker-compose.

Our sample Spring Boot application is a basic REST service which retrieves the list of users stored in a DB. A user is a simple object with an id, a first name and a last name.

We will use a Postgres database to provide the data to be shown. I will skip the controller, service and repository parts, since they are not directly related to Docker. You may find the source code of the complete implementation here.

You will notice that Liquibase is configured to auto-create the users table and insert a few sample records, so we will not need to create them manually.

Our Dockerfile looks like this:

FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

This is the same as Spring’s getting-started sample: it uses an openjdk base image, copies the fat jar as app.jar and defines it as the entry point. We will not build this image manually; Docker Compose will do it for us.
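
For reference, building and running the image manually would look like this, using the image name from the docker-compose.yml below (note that the container would still need a reachable database):

$ docker build -t app-springboot-postgresql .
$ docker run -p 8088:8088 app-springboot-postgresql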

Let’s check our docker-compose.yml:

version: '3.1'

services:
  postgres_db:
    container_name: postgres_db
    image: postgres:latest
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=root
      - POSTGRES_DB=users-db
    ports:
      - 5432:5432
  my-webapp:
    container_name: app-springboot-postgresql
    image: app-springboot-postgresql
    build: ./
    depends_on:
      - postgres_db
    ports:
      - 8088:8088

We have a section called services where we define the containers we would like to compose. As the first service, we define the Postgres database as postgres_db. The base image for this service is postgres:latest. Then we define the environment variables needed to run it: the database name (users-db) and the username and password (both root for this example).

Lastly, we define the ports for accessing this container from the outside world: we bind container port 5432 to the same port on the Docker host.

Secondly, we define our Spring Boot application as a service: my-webapp is the service name in the docker-compose.yml file, and app-springboot-postgresql will be the container name. The build property is either a string containing a path to the build context, or an object with the path specified under context; Docker will look in this path for the Dockerfile to build the application. As with the DB configuration, we define the port for accessing our application: container port 8088 is bound to the same port on the Docker host, since our application runs on the embedded Tomcat server’s port 8088 (we will see this in application.yml).

There is one more important property here: depends_on. It declares that the Spring Boot application depends on the Postgres DB, so Postgres is needed to run our application; docker-compose will start (and stop) services in dependency order and create (and start) dependent services if they are not already running. Note that depends_on only controls start order; it does not wait for Postgres to be ready to accept connections.
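
If the application happens to start before Postgres accepts connections, one simple workaround is to bring the services up in two steps:

$ docker-compose up -d postgres_db   # start the database first
# give Postgres a moment to initialize, then start the app
$ docker-compose up -d my-webapp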

Last but not least, we will have a look at our application.yml of the Spring boot app:

spring:
  application:
    name: docker-springboot-demo
  jpa:
    generate-ddl: false
    hibernate.ddl-auto: none
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    properties:
      hibernate.generate_statistics: false
      hibernate.show_sql: false
  datasource:
    url: jdbc:postgresql://postgres_db:5432/users-db?reWriteBatchedInserts=true
    username: root
    password: root
    driver-class-name: org.postgresql.Driver
  liquibase:
    change-log: classpath:db.changelog/db.changelog-master.xml
    enabled: true

server:
  port: 8088

This is pretty standard, but there are two important points to be aware of.

We have defined our server port as 8088, which is why this port was bound to the same port of the host in the docker-compose.yml file.

If you look at the URL of the datasource, the database hostname is defined as postgres_db. Remember? This is the service name of the Postgres container. We do not need to know the IP of the Postgres container; docker-compose puts the services on a shared network where each service name resolves to its container, so we can simply use the service name to reach the database.

To run our application as a Docker container:

  • Build the Spring Boot fat jar with “mvn clean install”
  • Run “docker-compose build --no-cache my-webapp” to reflect any changes in the app to the image; otherwise “docker-compose up” will not update the image once it’s built.
  • Run “docker-compose up” (optionally with “--detach” to run it in the background)
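
Put together, the whole sequence is:

$ mvn clean install
$ docker-compose build --no-cache my-webapp
$ docker-compose up --detach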

That’s it! Our application is up and running in a Docker container which is connected to a Postgres DB running in a Docker container as well, and we have configured both of them together, easy, right?

When you run “docker ps -a” you should see both containers, app-springboot-postgresql and postgres_db, listed with status Up.

Now, navigate to the application URL (your Docker host on port 8088) in your browser to see if everything is working.

You can find out your host IP via this command: “docker-machine ip”.

It might be “localhost” if you are on Linux or running Docker Desktop on a Mac or Windows. If you are running Docker Toolbox in a virtual machine, it will probably be 192.168.99.100.
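
You can also test from the command line with curl. The endpoint path below is an assumption (check the controller in the source code), and localhost should be replaced with your Docker host IP if needed:

# assumed endpoint path -- verify it against the User controller in the source
$ curl http://localhost:8088/users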

Conclusion

We have briefly explained Docker and containers, what we use them for and the basic terminology, and we have built a sample containerized Spring Boot application with a Postgres DB.

We called the GET endpoint of our User REST controller, which returned a JSON-formatted list of users from the Postgres DB, and it all happened in Docker containers!

Thank you for taking the time to read this post; I hope you find it helpful! Do you have any comments or questions? Please feel free to comment on this story!

I work at Sytac.io. We are a consulting company in the Netherlands, and we employ around 100 developers across the country at A-grade companies like KLM, ING, ABN-AMRO, TMG, Ahold Delhaize and KPMG. Together with the community, we run DevJam; check it out and subscribe if you want to read more stories like this one. Alternatively, have a look at our job offers if you are seeking a great job!
