
Last year I joined the UXPin DevOps team to work on a collaborative design platform. Back then the company employed over a dozen developers, and everyone used Vagrant to run our application locally. The virtual machine image was shared via a network drive. The biggest flaw, though, was that the image had no maintainer: when one team of developers was implementing a new feature, they modified their Vagrant image, while other teams made different changes to theirs. As a result, no one had a fully functional development environment on their computer.
At that time the UXPin app consisted of three services: the main PHP monolith and two smaller Node services. The company was growing very fast and planning to hire more engineers, so steps needed to be taken to ensure developers had a stable and reliable environment to work in.
In general, Dockerizing existing applications is not an easy task. You can start writing a Dockerfile by defining the dependencies of your service; remember to take only what's really required to run that specific application. When moving from a traditional server, you shouldn't need SSH, cron jobs, or monitoring agents. The next step is to add all the commands needed to build your service from source. I must warn you here: writing a Dockerfile is just the tip of the iceberg. After that you need to take care of configuration. Services running inside containers can't have hardcoded logging options, a public URL for self-reference, or public URLs of other services; it should be possible to configure these things when starting the container, for example via environment variables. Creating an image that is independent of the tier will later enable you, for example, to easily run many test environments. In the case of multi-service applications, you will also need to devote some effort to identifying and configuring communication between services.
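To make this concrete, here is a minimal sketch of a Dockerfile along these lines, assuming one of the smaller Node services; the file names, port, and variable names are illustrative, not our actual setup:

```dockerfile
FROM node:4

# Build the service from source, copying only what the app needs to run:
# no SSH, no cron, no monitoring agents.
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY . /app

# Nothing tier-specific is baked in; logging options and service URLs are
# injected at container start, so the same image runs on any tier.
ENV LOG_LEVEL=info \
    MAIN_APP_URL=http://app:80

EXPOSE 3000
CMD ["node", "server.js"]
```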
One of my friends had already started creating a Docker-based development environment, and my first task was to continue his work. The good part was that we had Ansible playbooks to provision our servers for the main application, and we used the same rules to create the first container. It wasn't perfect, because a lot of unnecessary packages got installed, but this way we could start working quickly: we traded a 1.5 GB image size for our time. Running our service requires multiple technologies, like databases, caches, and queue services, so from the start we used docker-compose to manage multiple containers.
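As an illustration of the layout (not our real docker-compose.yml; the service names, images, and variables are assumptions), a stripped-down file in the compose format of that era looked roughly like this:

```yaml
app:
  build: ./main-app        # the PHP monolith
  ports:
    - "8080:80"
  links:
    - mysql
    - redis
  environment:             # configuration injected at start, as above
    - DB_HOST=mysql
    - CACHE_HOST=redis

mysql:
  image: mysql:5.6
  environment:
    - MYSQL_ROOT_PASSWORD=dev

redis:
  image: redis:3
```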
In our team there are backend developers and web developers, and everyone uses a Mac. At the beginning, starting our Docker development environment required devs to run many commands: docker-machine to provision the Docker host, a lot of git clones to fetch the sources of all projects, compiling JS and CSS code, installing npm and Composer modules, docker-compose, and finally database migrations. Because of this, our process was long and prone to mistakes, not to mention how painful it was to roll out updates. Another thing is that not everyone was familiar with Docker. For all those reasons I started writing a Makefile to automate provisioning of the local environment (a condensed sketch follows below). Months after creating it, I can say it was a good choice. Make tasks are easy to write and flexible, and right now every step of our initial process of creating the local env is packed into a single make task.
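Here is a condensed sketch of what such a Makefile can look like; the project names, Git URLs, and the migrate command are hypothetical placeholders, not UXPin's actual ones:

```makefile
PROJECTS = main-app node-service-a node-service-b

.PHONY: up machine clone deps migrate

# One task from a clean laptop to a running environment.
up: machine clone deps migrate
	docker-compose up -d

machine:
	docker-machine create -d virtualbox dev || true

clone:
	@for p in $(PROJECTS); do \
		[ -d $$p ] || git clone git@github.com:example-org/$$p.git; \
	done

deps:
	cd main-app && composer install
	cd node-service-a && npm install  # on the host; see the npm story below

migrate:
	docker-compose run --rm app bin/migrate
```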

When we started, the newest version of Docker was 1.8. At the time docker-machine caused some serious problems:
- the virtual machine could sometimes crash with an untraceable kernel panic,
- when booting the VM, the Docker daemon sometimes didn't start and we had to manually remove the /var/lib/docker/network directory,
- killing or restarting the VM sometimes caused Docker to complain about certificates.
We could not find a solution for the first issue. Developers were frustrated: they just wanted to code, but their environment wasn't stable enough and crashed a few times a day. This was partly a problem with our applications, which we solved months later. For the other two issues, our Makefile was written so it could recover and bring the environment back to a stable state (by this I mean problems related to docker-machine and Docker configuration).
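The recovery steps boiled down to a make task along these lines; the exact commands here are my reconstruction and should be treated as assumptions:

```makefile
recover:
	docker-machine start dev || true
	# the daemon sometimes refuses to start until the stale network
	# state is wiped (the second issue above)
	docker-machine ssh dev \
		"sudo rm -rf /var/lib/docker/network && sudo /etc/init.d/docker restart"
	# refresh TLS certs after a hard kill/restart (the third issue)
	docker-machine regenerate-certs -f dev
```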
Another issue we encountered was very slow npm install execution. Initially we wanted to run this command inside the container, so that devs wouldn't need to install the proper Node version locally. Unfortunately, installation was extremely slow; sometimes we had to wait up to 40 minutes for it to complete. This was most likely because the install command was executed inside a volume shared with the host machine. It is a known issue in VirtualBox: as the number of files in a shared folder increases, performance decreases, and during npm install in our main app over 70k files are created. Finally, we decided to install node modules from the host machine.
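After that change, the wiring looks roughly like this (the path and service name are illustrative):

```yaml
node-service-a:
  build: ./node-service-a
  volumes:
    # The source, with node_modules already installed on the Mac, is
    # mounted through a VirtualBox shared folder. Reading these files at
    # runtime is acceptable; writing ~70k of them during an in-container
    # npm install is what used to take up to 40 minutes.
    - ./node-service-a:/app
```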

After half a year of officially using Docker in the development process, we managed to fully capture our infrastructure in Dockerfiles. Currently, our local environment consists of 23 containers, of which 11 are services built by us. We have containers with applications written in PHP, Ruby, and Node; Docker allows us to maintain and run them all in a unified way. The remaining 12 containers are utilities like databases, caches, queues, and service discovery. Next step: dynamic staging environments.