Docker and Microservices — Why do they make us better computer science engineers? — Part 2

ANIRBAN ROY DAS
8 min read · Sep 3, 2016


If you haven’t already read Part 1 of this post, please go ahead and read it first. Part 1 explains what Docker is and what microservices are, covers the history of containers (not just Docker), discusses why monoliths are not good, and walks through the technological considerations of a microservices architecture, such as the API gateway, service discovery, service registration, database transactions, etc. Go to this link to read Part 1.

How do Docker and Microservices help each other co-exist?

Let’s first see how to implement microservices without Docker. That way you will understand how Docker can actually help and make microservices easier to implement. Then we will look into how to implement microservices with Docker.

Microservices without Docker

Each individual microservice can be written in any language: one in Python, another in Ruby, another in Go, and yet another in Node.js. So we have a polyglot microservices architecture.

Now the basis of microservices is to reap the benefits of ease of development and to manage scalability in an easier way. Ease of development is established, as you can code in as many languages as you want: keep the source code of each service in its own repository and you are good to go. When it comes to easing the scalability part, you need high scalability and fault tolerance. Also, some of the microservices will need a persistence layer: a database like MySQL or MongoDB, a cache like Redis, or a message queue like RabbitMQ or Kafka, which needs to persist some data.

Production Environment

Well, what we can do is start each service in a different VM, or if you want to mix and match the services across the VMs, you can still do that, but then you have to install and configure the dependencies of each service on all the VMs. If you keep one VM per service, then at least you can manage the dependencies better and don’t have to repeatedly install and configure them on every VM individually.

Now, after installing and configuring the dependencies on each VM, you start the services on their corresponding VMs. You run a service discovery microservice on one separate VM, and all the other VMs running different services talk to this service discovery microservice via the address of that separate VM, so that you don’t have to worry about the locations/addresses of all the different microservices.
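For concreteness, if that service discovery microservice were Consul (an assumption on my part; etcd or ZooKeeper would work similarly), each VM could announce its service with a small JSON service definition like this sketch (the service name, address, and port are made up):

```json
{
  "service": {
    "name": "orders",
    "address": "10.0.1.12",
    "port": 8080,
    "check": {
      "http": "http://10.0.1.12:8080/health",
      "interval": "10s"
    }
  }
}
```

Registering this with the local Consul agent lets every other service look up "orders" by name instead of a hardcoded address, and the health check drops dead instances automatically.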

Now what about scaling? If you want to scale a particular service, you can start another process (or multiple processes) of that service on the same VM where it is already running. This way you have multiple instances of the service, and all you have to do is put a load balancer in front to distribute requests among these replicated instances.

So that means we have to use a load balancer per service, or maybe we can have a global load balancer which points to all the services and their instances across all the VMs combined.
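As a sketch, the per-service load balancer could be as simple as an nginx configuration like this (the service name, IPs, and ports are invented for illustration):

```nginx
# Round-robin traffic across two instances of the "orders" service
upstream orders_service {
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;  # second instance, possibly on another VM
}

server {
    listen 80;
    location /orders/ {
        proxy_pass http://orders_service;
    }
}
```

Adding a new instance means adding one `server` line and reloading nginx, which hints at why automating this by hand gets tedious as services multiply.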

Now, if you don’t have many resources left in a single VM, you need to start a new VM and install and configure the dependencies required by the service you want to scale up on that new VM. Then you can start one or more instances of the service on that VM and modify the load balancer to point to those instances too.

To help automate these steps, you can use configuration management tools like Chef, Puppet, Ansible, or SaltStack, along with some handy shell scripts, to handle the installation and configuration of each service’s dependencies.
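A minimal sketch of what such automation might look like with Ansible (the host group, package names, and paths here are all hypothetical):

```yaml
# playbook.yml -- install one service's dependencies on its VMs
- hosts: orders_vms
  become: yes
  tasks:
    - name: Install system-level dependencies
      apt:
        name: [python3, python3-pip, libpq-dev]
        state: present

    - name: Install the service's Python requirements
      pip:
        requirements: /opt/orders/requirements.txt
```

One playbook per service, run against the right group of VMs, replaces a pile of hand-typed install commands, but you still have to write and maintain a playbook like this for every service.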

Well, it looks like we can make it work with some good networking skills and by successfully setting up the service discovery microservice and the load balancer.

Development Environment

Till now we have seen how to make microservices work without Docker in a production environment, using configuration management tools and some shell scripts to automate the process and make it work.

Let’s see whether, and how, we can set it up in a development environment; what I mean here by development environment is the developer’s local machine.

Since we cannot have multiple VMs on our development machine (actually, we can: we can use VirtualBox and Vagrant to fire up multiple VMs on the local machine itself, but if we have many microservices and start one VM per microservice, we will run out of resources, so it’s not a good idea to do it this way), we will install the dependencies for all the services together on the local machine itself.

The problem we may face is conflicts between dependency versions: different services may need different versions of the same dependency. We can get around this by using a version management tool for each dependency; it is hard work, but we can somehow make it work. Then we can follow the same process: run all the services on the local machine itself, including a proper load balancer and the service discovery microservice.
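For example, Python’s virtualenv tooling is one such per-dependency isolation trick (a sketch using two hypothetical services; the same idea applies to rvm for Ruby or nvm for Node):

```shell
# Give each service its own isolated Python environment on the dev machine,
# so conflicting dependency versions never collide.
python3 -m venv /tmp/orders-env
python3 -m venv /tmp/billing-env

# Each environment has its own interpreter and its own pip; packages
# installed into one are invisible to the other.
/tmp/orders-env/bin/python -c "import sys; print(sys.prefix)"
```

This works, but you need one such mechanism per language in your polyglot stack, which is exactly the overhead Docker removes.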

Microservices with Docker


We looked into how to set up a microservices architecture without using Docker. Now, to convince you that Docker drives the microservices movement and is one of the main reasons for the fast adoption of microservices by everyone in the industry, I only have to show you how you can do all the things mentioned in the above section with much less time, much less code, much less overhead, and in a much more user-friendly way.

Okay.

Development Environment

Step 1

Install Docker (Engine + CLI), Docker Machine, and Docker Compose.

Step 2

Write Dockerfile for each of your microservices and commit to your code repository.
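A Dockerfile for, say, a Python microservice might look like this minimal sketch (the base image, file names, and port are assumptions for illustration):

```dockerfile
# Hypothetical Dockerfile for a Python microservice
FROM python:3.5

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the service's source code
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

All the dependency installation that previously lived in configuration management scripts now lives in this one committed file, and it runs identically on every machine.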

Step 3

Write docker-compose.yml and docker-compose.prod.yml file for each of your microservices and commit.
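A docker-compose.yml for one such service plus its persistence layer might look like this sketch (the service names, ports, and MySQL image tag are illustrative):

```yaml
# Hypothetical docker-compose.yml for one service and its database
version: '2'
services:
  orders:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db

  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Compose starts both containers on a shared network where the service can reach its database simply by the hostname `db`, which is a small taste of built-in service discovery.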

Step 4

Start each service by running either of the two commands below once per service from the root of its source code directory, preferably keeping both the Dockerfile and docker-compose.yml in the project root.

  • docker build -t service_image_name:sometag . && docker run --name service_name -d service_image_name:sometag
  • docker-compose up -d

Buddy! You are done. Impressed already? But still not convinced entirely? Wait till you read the Production Environment section.

Production Environment

Step 1

Install Docker (Engine + CLI), Docker Machine, and Docker Compose on all the VMs. (You can do this with Docker Machine: a single command looped over the number of VMs.)

for i in $(seq 1 5); do
    docker-machine create --driver amazonec2 \
        --opt=mention_some_options \
        node-0"$i"
done

Step 2

Write Dockerfile for each of your microservices and commit to your code repository.

Step 3

Write docker-compose.yml and docker-compose.prod.yml file for each of your microservices and commit.

Step 4

Start each service by running either of the two commands below once per service from the root of its source code directory, preferably keeping both the Dockerfile and docker-compose.yml in the project root.

  • docker build -t service_image_name:sometag . && docker run --name service_name -d service_image_name:sometag
  • docker-compose -f docker-compose.prod.yml up -d

The only differences here are in Step 1 and Step 4 (option 2). In Step 1 you create multiple VMs, and on a cloud provider at that (AWS in this case), by running one shell loop. In Step 4 (option 2), you just add the -f flag, which uses the docker-compose.prod.yml file instead of the default docker-compose.yml. The difference between these two files is that the production version has some environment variables pertaining to the production environment, including confidential data.
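For illustration, a docker-compose.prod.yml might differ from the development file mostly in its ports and environment sections (all names and variables here are hypothetical; the secret comes from the host environment rather than being committed to the file):

```yaml
# Hypothetical docker-compose.prod.yml -- same services, production settings
version: '2'
services:
  orders:
    build: .
    ports:
      - "80:8080"
    environment:
      - APP_ENV=production
      - DB_PASSWORD=${DB_PASSWORD}  # injected from the host, never committed

  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
```

Compose substitutes `${DB_PASSWORD}` from the shell environment at run time, so the confidential values stay out of the repository.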

So if you are still not convinced why the Docker movement acted as a catalyst for the microservices movement, then either I have very bad communication skills and suck at explaining or writing blogs, or you have some serious issue with the Docker folks, or maybe Docker burned you in some other way and you just don’t like it anymore. Yes, some people have had bad experiences with Docker too, but that’s just a few people hitting some bad Docker bugs, and I guess Docker updates will solve those issues.

Now let me explain in detail what just changed when using Docker to set up a microservices architecture.

  • Parity, parity, parity! There were close to no changes you had to make between running Docker and microservices in a development environment and in a production environment. We got parity, unlike the without-Docker method, where production required a lot of extra work and development, while lighter, had problems of its own.
  • Dependency management and isolation: You did not have to install and configure tons of software and libraries to run your services; you just had to write a Dockerfile and a docker-compose.yml file. That’s it. In the without-Docker method there was so much dependency management to do, and in the development environment there were so many dependency version conflicts. All those problems just went away.
  • Ease of use: In the without-Docker method there were so many shell scripts, configuration management tools, and commands you had to use. With Docker, you just had to use a handful of commands, 2 to 3 actually, to run your services in both development and production environments. How cool is that?
  • Scaling: This is a production environment concern. Without Docker, you had to create a completely new VM and repeat all the installation and configuration steps every single time you scaled a service. With Docker, all you have to do is run the docker-compose scale service=scaling_factor command, or run the docker run command multiple times, and you are good to go. Even if you need to create a new VM first, it is still just one command to create the VM and one to scale up. You are done.

I think we will stop here. Enough of running Docker and microservices for one day. Now go ahead and try out those commands; if you don’t know about Docker yet, go learn about it and get used to it. It’s going to be your Swiss Army knife for everything in the coming years. It’s going to ease your development, make you a good developer, and also give you the DevOps flavour.

We will talk about how docker and microservices make us better computer science engineers in the next and final part of the series. Here is the link to Part 3 of the series.

Link to Part 3


ANIRBAN ROY DAS

I believe in knights and 50 other things. An observer, listener, storyteller, make believer and writes colourful texts on a dark background for a living.