Continuous Delivery at QLECTOR (6/5): going a step further!

Continuous delivery follows principles inherent to lean: continuous improvement by removing all kinds of waste. In our case, that means automating repetitive tasks and focusing only on work that drives value, the work that requires human creativity and skill. Can we build a development environment to achieve this?

Following this lean principle, we identified the requirements such a platform should satisfy:

  • the best scenario we can aim for is a zero-knowledge setup for anyone working on the project: just by cloning the code, opening an editor or IDE and running a script, we should have everything ready to develop, view the app and get feedback (see the bootstrap sketch after this list). This should not take longer than a minute or two :)
  • the environment should not impose constraints on developers: everyone should be able to work with the OS and IDE of their choice, removing unnecessary learning curves
  • we should replicate the same environment as in production, to make sure no issues arise other than those that could also happen in production under the same conditions. By doing so, we standardize the running OS, the dependencies (both OS and application packages), component locations, encodings, as well as users and permissions
  • we should prevent issues due to stale conditions on the developer's side, such as stale packages, configurations or files. These sometimes make things work locally but impossible to replicate on other machines
  • make it configurable: the developer may decide which modules and services should run and how, e.g., should changes be watched so that the code is recompiled, linted and/or tested on each change? When starting the environment, shall we continue where we left off or, for example, recreate the database? Shall we recreate the containers?
  • provide default configurations, certificates and credentials, so that the developer does not need to worry about them until a specific change is required
  • provide tools and means to ease debugging
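
As a rough illustration of the zero-knowledge setup, a single entry-point script can hide all the plumbing. The sketch below is hypothetical; the script and file names are ours for illustration, not QLECTOR's actual tooling:

    #!/bin/sh
    # dev.sh - one-command bootstrap of the development environment (sketch)
    set -e

    # Generate default settings, credentials and certificates on first run only
    [ -f .dev/settings.env ] || ./scripts/generate-defaults.sh

    # Regenerate the docker-compose definitions so we never start from a stale setup
    ./scripts/generate-compose.sh

    # Build images and start the whole stack
    docker-compose up -d --build

    echo "Environment ready at https://localhost"

A new developer then only clones the repository and runs ./dev.sh; everything else is derived from defaults.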

We achieved this with a Docker image hierarchy that provides the same environment for development, CI and production, and lets us start all services as defined in docker-compose. In development we mirror the code from the git repository into the container and run the modules inside it, while the production image is created from the same base image but with the released binaries persisted into it. To avoid mismatches in docker-compose definitions, we provide a base definition plus modular overrides that keep track of the changes specific to DEV or PROD. These definitions are regenerated on each run, to ensure we never run on a stale setup. We prevent stale conditions in containers by providing means to recreate the containers, as well as their associated volumes, whenever the developer requires it. Application dependencies are not persisted into the Docker images but locked to specific version numbers. The development environment caches downloaded dependencies, avoiding repeated downloads and ensuring quick startups.
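
In docker-compose terms, the layering and the stale-state controls might look like the sketch below (file names are illustrative, not our actual layout):

    # Shared base definition plus an environment-specific override; the dev
    # override bind-mounts the source tree, while the prod image already
    # carries the released binaries.
    docker-compose -f docker-compose.base.yml -f docker-compose.dev.yml up -d

    # On demand: tear down containers together with their volumes and
    # recreate everything from scratch to clear any stale state.
    docker-compose -f docker-compose.base.yml -f docker-compose.dev.yml down -v
    docker-compose -f docker-compose.base.yml -f docker-compose.dev.yml up -d --force-recreate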

Settings for the developer environment (which modules should run and whether we should watch code changes), as well as credentials and SSL certificates, are generated with defaults but can be overridden by developers at any time.
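
One simple way to get this behavior is to generate the files only when they are missing, so a developer's overrides are never clobbered. A sketch under that assumption (paths are illustrative):

    # Defaults are created on first run; an existing file is treated as a
    # developer override and left untouched.
    [ -f .dev/settings.env ] || cp templates/settings.defaults.env .dev/settings.env

    # Self-signed certificate for local HTTPS, generated once
    mkdir -p .dev/ssl
    [ -f .dev/ssl/dev.key ] || openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout .dev/ssl/dev.key -out .dev/ssl/dev.crt \
        -days 365 -subj "/CN=localhost"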

When watching for changes, we ensure the required modules are continuously compiled, and the developer may request additional checks through configuration (testing and/or linting the code).
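
A minimal watch loop in this spirit could look as follows (a sketch assuming inotify-tools is available and that build, lint and test exist as make targets; RUN_LINT and RUN_TESTS stand in for the configuration flags described above):

    # Rebuild on every change; lint and test only if the developer enabled it.
    while inotifywait -r -e modify,create,delete src/ >/dev/null 2>&1; do
        make build
        [ "$RUN_LINT" = "true" ] && make lint
        [ "$RUN_TESTS" = "true" ] && make test
    done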

Debugging is one of the most important tasks in development. Time invested into making it easier, gaining visibility, or simply providing shortcuts to relevant directories saves developers time.

To provide console access, we added two commands that let us log into the containers and attach tmux sessions with predefined window layouts covering all running processes and databases. Panels are grouped by interacting components, so we can concentrate on a given screen to observe behavior, diagnose issues and work on a solution.
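
Such a command can be a thin wrapper around docker exec and tmux, for example (container, session, log and database names are illustrative; tmux is assumed to be installed in the image):

    #!/bin/sh
    # Open a predefined tmux layout inside the running app container.
    CONTAINER=app
    docker exec "$CONTAINER" tmux new-session -d -s debug \
        'tail -f /var/log/app/server.log'          # pane 1: application logs
    docker exec "$CONTAINER" tmux split-window -h -t debug \
        'psql -U app appdb'                        # pane 2: database console
    docker exec -it "$CONTAINER" tmux attach -t debug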

Since debugging sometimes requires a separate set of tools that are not part of the running applications, we developed a dedicated container with them, which attaches to the running environment and provides utility scripts that may help with the task. Scripts we develop to diagnose a given situation better or faster are included in that Docker image, boosting the whole team's productivity.
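
Attaching such a toolbox is straightforward with plain Docker commands, along these lines (image, network and container names are illustrative):

    # Run the debug-tools image on the stack's network and share the app
    # container's volumes, so its files and sockets are reachable.
    docker run -it --rm \
        --network myproject_default \
        --volumes-from myproject_app_1 \
        debug-tools:latest /bin/sh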

Failures may occur at any step of the stack, and it's important to have visibility across all stages. If a request is failing, we should know whether it is due to bad Nginx mappings or because a service is failing to respond for some reason. Our configuration provides the ability to make requests through the whole stack or to jump in at any stage to diagnose the issue.
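
For instance, probing the same endpoint at each stage quickly localizes the failure (ports, paths and container names are illustrative):

    curl -ik https://localhost/api/health          # full stack, through Nginx (-k for the self-signed dev cert)
    curl -i http://localhost:8080/api/health       # service port directly, bypassing Nginx
    docker exec app curl -i http://localhost:8080/api/health   # from inside the container

If the first call fails but the second succeeds, the Nginx mapping is the suspect; if both fail, the service itself is.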

Conclusions

Implementing a continuous delivery pipeline was a great journey, and it is still ongoing as we look for ways to improve the pipeline all the way up to deployment. Some benefits we gained from it are a quick feedback loop while developing the product, simplified configurations across multiple environments, short setup times, measurable quality and actionable items from reports, as well as dockerized images that work in any environment, released for every working commit we push. Even if it may feel a bit scary at first, the small-batches principle turns out to help us develop faster and better.

Did you have a similar experience, or are you setting up your own pipeline? Ping us; we will be glad to hear about your experience! Thinking about a new job? We are always looking for the best professionals!