Using Docker in a development environment

This article describes our experience using Docker in development environments. I point out some advantages of using Docker this way, some problems we ran into, and the benefits we gained. This is not a comprehensive guide to using Docker in a development environment, but it could be useful for software architects and team leads.

Introduction

This morning, after reading my email and reviewing changes in the issue tracker, I performed my usual ritual:

gfp # which is alias for “git fetch --prune”
make up # wait several minutes

I waited while everything was updated and recompiled and the current version of the project I am working on launched, then started working on the issues I had spotted in the task tracker.

After lunch, a colleague asked me to help solve a specific issue on a project I hadn’t touched yet. I quickly managed to bring it up on my machine and help him with the issue:

gcl git@bitbucket.org/.../secret.git # gcl is alias for “git clone”
make up # wait 15 minutes

Dependency hell

When the company I worked at started to grow and hire a lot of developers remotely, it became obvious that we needed standardization. We hit a problem where people could spend a day or two just launching our complicated solutions, wasting not only their own time but also the time of the other developers helping them.

The problem was not with any specific technology but with the wide range of different tools and languages we used. For example, we might use RoR for the backend, several microservices written in Go for queue processing, NodeJS for asset management and compilation, and some specific software like Cassandra or RabbitMQ.

When people tried to launch all that stuff (especially if they were new to some part of our stack), it could be real torture: they needed to run “npm install” here and there, “go get .” and “go build” in several other folders, launch Cassandra, and then launch workers with specific arguments. And all of this before they could even start studying the project.

Docker, docker-compose

At some point, I decided to stop all this madness and took responsibility for fixing it. First of all, I dockerized all the software we used and created a docker-compose file for each project. So, in order to launch a project (if the developer had all vendor dependencies installed) and start closing issues, they just ran:

docker-compose up
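For illustration, a docker-compose file for a stack like the one described above might look roughly like this. The service names, images, and ports here are hypothetical, not our actual configuration:

```yaml
# Hypothetical docker-compose.yml for an RoR + Go + NodeJS stack;
# all names and images are illustrative.
version: "2"
services:
  backend:            # RoR application
    build: ./backend
    ports:
      - "80:3000"
    depends_on:
      - cassandra
      - rabbitmq
  worker:             # Go microservice for queue processing
    build: ./worker
    depends_on:
      - rabbitmq
  assets:             # NodeJS asset management and compilation
    build: ./assets
    volumes:
      - ./assets:/app
  cassandra:
    image: cassandra:3
  rabbitmq:
    image: rabbitmq:3
```

With a file like this, a single `docker-compose up` builds and starts every piece of the stack in the right order.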

There were issues with Docker adoption in the team. Some argued, some silently suffered, and some just weren’t able to install Docker for one reason or another. We also hit I/O performance issues for developers who worked on macOS (luckily, we fixed them with the help of a curlbash script from GitHub: https://github.com/adlogix/docker-machine-nfs). We hit compatibility issues between certain versions of Docker and Ubuntu, “$PATH” issues, Docker update issues, and many others.

I mustered as much patience as I could and waited, helping to fix the most critical Docker issues in the meantime. We also tracked all the issues we met, along with their solutions, in our documentation repository. After some time, I noticed the situation had improved: everybody had installed docker and docker-compose and gotten used to them. I stopped receiving Docker support requests, and I stopped receiving “I can not install/build this software” requests as well. Everything just worked as it was supposed to, as Docker promised.

Make

At this point, I started looking for a way to help developers in our company switch between projects, because rotation was quite common and each developer switched projects once every month or two.

I wanted to offer a simple solution to the team without forcing them to study yet another technology (like Ansible). I even implemented a lightweight wrapper around shell scripts (which we named Darius, https://github.com/idfly/darius), but soon realized it was a mistake: I had invented yet another stack to learn. So I reviewed my options again and switched to Make.

Some developers were really sad about this decision because they had started to get used to Darius and its fancy log output. It is still a controversial decision even now, when everything is already working. But, common sense considered, switching to Make was the right call: it is everywhere, it is old, it is bug-free, and a lot of people know how to work with it.

I rewrote all our launch scripts as Makefiles (which now handled vendor installation, initial compilation, and migrations) and updated the “Installation” section of the readme of every project we worked with to:

Installation
------------
* Clone repo
* Install make, docker, and docker-compose
* Run `make up`
* Open "http://localhost" in browser
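A minimal Makefile behind that `make up` could look like the sketch below. The target names and the exact commands are assumptions for a Ruby/NodeJS project; our real Makefiles varied per project:

```make
# Sketch of a project Makefile; commands are illustrative.
.PHONY: up build migrate

up: build migrate
	docker-compose up -d

build:
	docker-compose build
	docker-compose run --rm backend bundle install
	docker-compose run --rm assets npm install

migrate:
	docker-compose run --rm backend rake db:migrate
```

The point is that `make up` is the only command a developer has to remember, regardless of which languages and tools the project uses underneath.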

Soon developers started using this approach not only to set up a project locally but simply to start it after a significant “git pull”, because it became a very convenient way to apply all updates to vendor files (Gemfile, package.json, composer.json, etc.) and migrations.
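One Make feature that helps here (a sketch, assuming a Ruby project with a hypothetical `vendor/.installed` marker file): making the install step a file target that depends on the vendor file, so Make reruns it only when Gemfile actually changed:

```make
# Hypothetical: reinstall vendors only when Gemfile changes.
vendor/.installed: Gemfile
	docker-compose run --rm backend bundle install
	touch vendor/.installed

up: vendor/.installed
	docker-compose up
```

After a `git pull` that did not touch Gemfile, `make up` skips straight to starting the containers.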

Conclusion

Looking back, I am glad that we used Docker to set up our development environment. However, like any other approach, this one has its own disadvantages and its own entry barrier. I spent a lot of time getting used to the Docker environment and hitting common issues like “multiple processes per container”, file permissions, and giant Dockerfiles. But despite those problems, I highly recommend every developer get acquainted with Docker and see which problems it could help solve.