Deploying Cloud Services & Applications Properly With Docker & Docker Compose

Naren Yellavula · Dev bits · Oct 7, 2017


TL;PDR (Too lengthy, Please Do Read): I am habituated to writing lengthy articles! Please jump straight to the practical second section below if lengthy text puts you into a sound sleep.

Hi, engineers. You all might be hearing things like Docker and Docker Compose in the world of DevOps. I got a chance to look at better ways to deploy our production web services. In this concise article, you are going to learn the technique of running multiple production containers using Docker. Docker is containerization software: it creates a virtual layer of abstraction that runs independent, isolated containers on a single Linux machine, all sharing the host kernel. We are gifted with tools in this modern era, and we should utilize them to deliver services seamlessly.

Note: I also wrote a programming book. If you happen to be a software developer, please do check it out.

PART — 1 (Know the existing approach)

Traditional approach

In the traditional monolithic cloud architecture, we tie everything into one big box and deliver it to the customer over the cloud. Because of the explosion of internet availability, companies of every size are aligning their strategies to the cloud. In simple words, they are moving to a subscription-based (SaaS) model instead of delivering the application directly. Customers stick with the service as long as the experience is good, and companies get a lot of scope to add features swiftly, which in turn boosts innovation. The cloud model has made companies more productive and agile.

Engineers design and develop a cloud service. When it comes to deploying that service, operations engineers traditionally choose a cloud VPS (Virtual Private Server), install everything (app server, database, proxy server) on that one box, and try to serve customers from there. This approach has a lot of drawbacks (though some flexibility perks), which is the reason for the emergence of microservices.

In the old approach, all of these pieces are installed on one VPS:

  • Application Server (Node.js, Java, or Python)
  • Proxy Server (Apache, Nginx)
  • Cache Server (Redis, Memcached)
  • Database Server (MySQL, PostgreSQL, MongoDB, etc.)

This single-box approach is no longer preferred, given the advent of automation and Continuous Integration/Continuous Deployment. With containers we can also capture a snapshot of a given environment, which reduces the risk of stepping into the wrong set of conditions when deploying services.

The concept of microservices is to separate tightly coupled logic and deploy the pieces independently. It means that, from the block figure above, the application server can be chunked further into independent pieces that talk to each other using HTTP or RPC. That doesn't mean you need x number of VPS instances to run your services. Containers provide a nice way to simulate isolation within the same machine or server, and Docker provides that containerization. If you wrote a service and are planning to deploy it on AWS EC2 or any cloud VPS, don't deploy your stuff as a single big chunk. Instead, run that distributed code in containers. We are going to see how to containerize our deployment using Docker and Docker Compose.

PART — 2 (Practical Section)

In this section we are going to do three things:

  1. Create a Web service with minimal health check API
  2. Use Nginx to proxy the above web service
  3. Containerize them and deploy

Source code for this project is available here.

For the first step, I am using Node.js and Express to create a simple health check API. For the second step, we are going to use the Nginx image from Docker Hub. For the third step, we are using Docker Compose to create the services and launch the cluster of containers. In place of the Node.js app, you could equally picture a Java Spring application or Django with Gunicorn running on the same port.

If you are new to Docker, please go through the documentation on the official website; they have tons of content on it. I will briefly explain docker-compose along with its installation.

Why Nginx (or Apache)?

If you are developing a web application (for example, an SPA), you write the UI and the backend service separately. When you test it, you will run into the CORS (Cross-Origin Resource Sharing) problem. To overcome it, you set CORS headers so the service plays nice with incoming requests from a different origin. But in production, that approach is strictly discouraged for security reasons. What should we do instead? We proxy the requests to the other port using a proxy server. When a proxy server serves both the static files and the dynamic services, a virtual same origin is achieved, and that negates CORS.

Prerequisites

Make sure you have installed Node.js, NPM, and Docker on the host machine (most probably a custom Linux box or EC2). There are many good installation guides available on the web. Docker Compose is a wonderful tool; install it using the commands below.
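For example, on Linux you can pull the binary straight from Docker's GitHub releases (the version number below is only an example; substitute whatever release is current):

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.16.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version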

Assuming we have all the above software installed, let us proceed with our illustration. Create a new directory called webService on a Linux machine.

$ mkdir ~/webService

This directory holds the information about our Docker containers, source code, etc. Create two new directories, one for our app service and the other for Nginx.

$ cd ~/webService
$ mkdir nginx app

Now app will hold the logic for our service. You can also develop it elsewhere (a Git-cloned path) and copy it here before building.

Go inside the app directory and add the files below.

Any Node project starts by initializing package.json. Create one like this.

app/package.json
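A minimal version looks something like this; the name and description are placeholders, and express is the only dependency we need:

{
  "name": "webservice",
  "version": "1.0.0",
  "description": "A minimal health check web service",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}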

Now let us create the source code for the server in the app directory.

app/server.js
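A sketch of the server, assuming the app handles the full /api/v1/healthcheck path (Nginx, as configured later, forwards the request path unchanged):

// server.js: a tiny Express app exposing a health check endpoint
const express = require('express');
const app = express();

// Respond with the current server time; res.json serializes the
// Date to an ISO timestamp string like "2017-10-07T05:39:51.408Z"
app.get('/api/v1/healthcheck', function (req, res) {
    res.json(new Date());
});

app.listen(8080, function () {
    console.log('Web service listening on port 8080');
});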

As you see, we are creating a simple Express service with a health check endpoint. Now create a Dockerfile for this project.

app/Dockerfile
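One way to write it (the node:8 base tag is an assumption; use whichever tag you prefer):

# Fetch the official Node image
FROM node:8

# Create the app directory and make it the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy package.json first and install dependencies, so this
# layer is cached between builds
COPY package.json .
RUN npm install

# Copy the rest of the source code
COPY . .

# Start the server on port 8080
EXPOSE 8080
CMD ["npm", "start"]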

This Dockerfile basically says:

  • Fetch the Node Docker image
  • Create a directory called /usr/src/app and set the working directory to it
  • Copy package.json from the current (host) directory into that working directory
  • Run npm install to fetch the required node modules (here it is only Express)
  • Copy the source code (all directories and files in the current directory)
  • Then start the server on port 8080

Now our app (web service) is ready. Let us create the Nginx container information. Come out of the app directory and enter nginx inside webService.

Create an Nginx configuration file called default.conf. We copy this in to override the default configuration that gets created inside the container.

nginx/default.conf
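A configuration along these lines does the job; the upstream name (service) and the app:8080 address are the ones we rely on in the rest of this article:

# Our Node web service, addressed by its docker-compose service name
upstream service {
    server app:8080;
}

server {
    listen 80;

    # Serve the static files copied into the image
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    # Forward all API calls to the Node web service
    location /api/v1 {
        proxy_pass http://service;
    }
}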

We are pointing Nginx to create an upstream service and use it to proxy all requests incoming to /api/v1 to our Node web service. But wait! What is app:8080 in the upstream block? It is the service name we are going to define in the docker-compose file later. Nginx is one service (container) that talks to the app service to forward and accept requests, so we need to know exactly which container we are going to talk to.

Let us create a directory for our public resources like index.html.

$ mkdir nginx/html
$ vi nginx/html/index.html

And add this content to it.

nginx/html/index.html
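Something like the following works; it uses the browser's fetch API to call the endpoint and print the result:

<!DOCTYPE html>
<html>
<head>
    <title>Web Service Health</title>
</head>
<body>
    <h1>Health check</h1>
    <p id="status">Checking...</p>
    <script>
        // Call the API through Nginx and display the response body
        fetch('/api/v1/healthcheck')
            .then(function (response) { return response.text(); })
            .then(function (body) {
                document.getElementById('status').textContent = body;
            })
            .catch(function () {
                document.getElementById('status').textContent = 'Service unreachable';
            });
    </script>
</body>
</html>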

It is a simple HTML file that, on loading, makes an API call to the health check service and displays the response on the web page.

Now add a Dockerfile to the nginx directory, similar to the Node app above, to do a few custom things. Here we copy the contents of the html directory to /usr/share/nginx/html and the configuration file default.conf to /etc/nginx/conf.d/.

nginx/Dockerfile
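Two COPY instructions on top of the stock nginx image are enough:

# Use the official Nginx image
FROM nginx

# Override the default site configuration with ours
COPY default.conf /etc/nginx/conf.d/default.conf

# Copy the static files into the web root
COPY html /usr/share/nginx/html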

Now we have everything we need. Create the docker-compose.yaml file to instruct docker-compose to build and launch the containers using our Dockerfiles.

./docker-compose.yaml
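A compose file along these lines matches the description below; the depends_on entry is an extra nicety that simply makes Compose start app before nginx:

version: "2"
services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    networks:
      - mynetwork
    depends_on:
      - app
  app:
    build: ./app
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge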

There is a lot to discuss in this YAML file. We are creating two services called nginx and app. Since Nginx needs to expose port 80 to the host, we add the ports directive. The build directive picks up the Dockerfile from the given directory. The networks directive tells which custom network a service belongs to, and we create that custom network (here mynetwork) under the top-level networks key. Setting its driver to bridge means the services on this network can talk to each other. Version "2" says this is the second type of YAML syntax; docker-compose had version "1" before.

What does a Docker network actually mean?

By default, all the containers we create fall under the same internal IP range (subnet). Docker networking allows us to create custom networks with additional properties like automatic DNS resolution. In the above YAML file, we are creating a network called mynetwork. The services (containers) app and nginx lie in the same subnet and can communicate with each other without the web service container ever being exposed to the outside world. In this way, the Nginx service becomes the single entry point to our web service. If anyone tries to access the app service directly, they cannot, because it is hidden. This actually secures our application.

Client 2 cannot reach app server directly

Create custom networks to control many network-related aspects (IPAM, static IPs, DNS, etc.) of Docker containers.
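Once the containers are up, you can see this network with the Docker CLI. Compose prefixes the network name with the project name (by default the lowercased directory name), so the exact name, assumed here to be webservice_mynetwork, may differ on your machine:

$ docker network ls
$ docker network inspect webservice_mynetwork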

Let us see how the directory structure ended up.

.
├── app
│   ├── Dockerfile
│   ├── package.json
│   └── server.js
├── docker-compose.yaml
└── nginx
    ├── default.conf
    ├── Dockerfile
    └── html
        └── index.html

Now run the docker-compose command to build the Docker images first and then launch the containers. Do this from the webService directory.

docker-compose build

It picks up the docker-compose.yaml in the current directory and tries to pull and build the Docker images by running "docker build" against each service's Dockerfile. It spits a lengthy log onto the console. Once this operation is successful, we can bring up the services (containers) using

docker-compose up

Now visit the IP of the host running Docker. It can be localhost or an EC2 VPS.

This page actually requests /api/v1/healthcheck, which first hits the Nginx service. Nginx then proxies the request to the app service and returns the result.

We can also access the service using the API

$ curl http://localhost/api/v1/healthcheck
"2017-10-07T05:39:51.408Z"

But we cannot do this

$ curl http://localhost:8080/api/v1/healthcheck

because we are not forwarding the app service container's port to the outside world. This forces requests to go only through Nginx (a single point of entry). If we had a container running a database server, it too could be added to the bridged network (here mynetwork) without being exposed to the outside world.

If you make any changes to the Nginx configuration or the source code, just run docker-compose build once again, then docker-compose up, to update the containers.

Important thing***

A while back we added these lines to the Nginx configuration file.

upstream service {
    server app:8080;
}

Since nginx and app are both bridged by mynetwork, one can reach the other by its service name; DNS is already taken care of by Docker. If this privilege were not available, we would have to hard-code an IP in the Nginx configuration file or assign a static IP from the subnet in the docker-compose.yaml file. This is a wonderful thing about Docker networking.

Now our service, as well as our static files, are up and running securely.

The entire source code for this project is available here. https://github.com/narenaryan/DockerCompose

Thanks for reading! Hope you are not sleeping :) You can reach me here
