How we built a fast new CI process

Ioannis Christodoulou
Published in Cytech
Nov 19, 2018 · 8 min read

Note: this is a collaborative article written by the following members of Cytech Mobile’s software engineering team: Ioannis Christodoulou, Themis Dakanalis, Manolis Alefragkis, Haroula Andrioti and Pavlos Lykakis.

Introduction

At Cytech Mobile, we decided to write an article about our new Continuous Integration (CI) process, which has made our product noticeably more efficient and our customers happier. Cytech Mobile is a technology provider in the telecoms space, focused on mobile messaging, marketing and payments. We develop our own software platform, called mCore, hosted in the Amazon cloud, and we have customers all over the world. We embraced the CI/CD process some years ago and built an automated mechanism for testing and integrating the whole code base. As our platform evolves, our customers’ needs and expectations change: we need to deliver new features faster and fix bugs without introducing new ones. This led us to rethink the whole CI pipeline from the beginning and re-engineer it to meet two main goals: be faster, and fully cover the product’s code base.

Release Life-cycle

The source code of our platform is organized in three main Git projects, hosted on our remote repository — powered by GitLab:

  • Commons: contains all the functionality that is used by the other projects. It is released as a JAR file and it must be in the class-path of the other modules of the platform.
  • Back-end: implements all the business logic of the various services the platform offers. It is released as a fat JAR file and runs as a standalone Java application.
  • Front-end: implements all the required graphical tools for the administration and the operation of the platform. It is released as a WAR file and runs as a web application on a Tomcat Web Server.
mCore Platform release life-cycle

Each project has its own set of Unit Tests (UT) and Integration Tests (IT), implemented with JUnit, JMockit and Selenium, which assure the proper operation and quality of our production source code. All automated tests run on our CI server (powered by Jenkins), either upon a push to the respective Git project or every night at a scheduled time. Once all tests of a project have passed, the respective artifact (JAR or WAR) is built and uploaded to our artifact repository (powered by JFrog Artifactory). The life-cycle management of both our internal artifacts and their external dependencies is handled with Maven.
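For a rough idea of what such a job looks like, the trigger-and-publish part can be sketched in a declarative Jenkinsfile along the following lines. The schedule, Maven goals and stage layout here are purely illustrative rather than our exact configuration, and the push-triggered builds come from the GitLab webhook rather than from the file itself.

```groovy
// Illustrative sketch of a per-project job: test on every push (via the GitLab
// webhook) and on a nightly schedule, then publish the artifact if the tests pass.
pipeline {
    agent any

    triggers {
        // nightly run at a Jenkins-chosen minute around 02:00; push builds are
        // fired by the repository webhook, not by this cron expression
        cron('H 2 * * *')
    }

    stages {
        stage('Test') {
            steps {
                // run the project's unit and integration tests with Maven
                sh 'mvn clean verify'
            }
        }
        stage('Publish artifact') {
            steps {
                // build the JAR/WAR and upload it to the artifact repository;
                // the repository coordinates come from the project's pom.xml
                sh 'mvn deploy -DskipTests'
            }
        }
    }
}
```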

Previous Status

Our previous CI process was composed of a number of tasks (Jenkins jobs) that had to be executed in a very specific order, forming our CI pipeline. Every task in the pipeline depended on the outcome of the previous ones, while all of them used some common resources (database, file system, etc.).

Old CI Pipeline — powered by Jenkins CI Server

The first task of the pipeline was responsible for creating and running a VirtualBox VM provisioned with all the resources the platform requires (configuration files, database, etc.). All subsequent tasks of the pipeline ran on that VM, which was available as a Jenkins node. As the technology stack for the creation and life-cycle management of the VMs we chose Packer and Vagrant.

The rest of the tasks were responsible for running the automated tests for all of the modules (Git projects) of our platform. One of the most important input parameters of our CI pipeline was the version, i.e. the Git branch, of the source code we wanted to run our automated tests against. Each task first had to check out the provided branch from the respective project before running the tests. A very important assumption we made here is that the provided branch had to exist in all projects.

In practice, the only branches common to all the projects were master and develop, which led us to run our pipeline automatically every night only for those two branches. For the rest of the branches, we decided to run only the single task of the pipeline responsible for the tests of a specific project, upon every push to that project on our remote repository. However, since we could not run the whole pipeline from scratch (remember the “branch must exist in all projects” requirement), we had to run the tests on one of the VMs already created for the develop and master branches.

Problems to be solved

Looking back at the previous status of our CI pipeline, a few critical issues stand out.

First and foremost, the procedure was applicable only to specific project versions (the master and develop Git branches), so we were not able to separately test and build any hotfix or new feature we wanted to apply to the platform’s production environment.

When it comes to an automation process, performance should of course be considered a significant factor.

As there were many unresolved dependencies between the modules of the platform (Git projects), their tests had to run sequentially, so each module of the platform had to wait for every other one to be ready. In fact, there was no way to run each module separately, while at the same time the whole CI pipeline was running inside a VM, which dramatically hurt the performance of the routine.

Initially, the design of our CI pipeline did not support building on Git push; the procedure could only be triggered either manually or at scheduled recurring time slots. Even when this mechanism became available, it worked only for the develop branch. As a result, in order to deploy the updates of a project, the pipeline had to run twice before everything was ready for production: once for the develop branch and finally once for the master branch.

New requirements

So, what would a better CI pipeline for a small team like ours look like?

Our test automation and build tools are key for moving fast in an ever-changing world, trying to meet customer demands by delivering fixes and new features in a timely manner. We decided they must be front and center in our pursuit of Continuous Delivery (CD). Ultimately, our goal is that each change goes from a developer’s machine to our CI tool and from there to our customers in an efficient and automated way, with a rich test suite providing the confidence to move fast without breaking things!

For our needs, it would all come down to being as fast as possible while covering as much of the code base as possible. A re-designed, stable, reliable and fast CI would start making sense in our every-day use by providing feedback within a reasonable amount of time, i.e. less than 10 minutes.

Especially for hot fixes, our options are either to wait for the CI to verify the fix, or to skip it due to time constraints and (god forbid) apply it manually.

Having a fast and robust CI pipeline means that there is no excuse for falling back to the second option.

How we solved them

In order to accomplish a fast new Continuous Integration (CI) process we turned to Docker, which has been one of the most important tools for this project. Thanks to Docker, the time to build and deploy our project has been significantly reduced: a container starts much faster than a Virtual Machine (VM), since it doesn’t need a hypervisor or a guest operating system.

Moreover, once we push a new feature to one of our projects, Jenkins starts an automated process that sets up an environment to run the build job and gives us direct feedback when it is completed. There is no more waiting for other jobs to run before our changes do, and no conflicts arise, since each build job runs independently of any others running in parallel.

New CI Pipeline — powered by Jenkins CI Server

In the example below we present the key points of the process for testing the front-end. More specifically, Jenkins starts a Docker container from an image containing the software environment in which the build will run (an operating system, Java, Maven, etc.). After that, Jenkins proceeds in stages in order to prepare the environment, build and test the project. Our current stack includes Tomcat as the servlet container, PostgreSQL as the database, Selenium as the UI and end-to-end testing framework, and Maven as the tool that builds the app and runs the unit and integration tests.

i) Prepare Database. In this stage, Jenkins starts a new container from the PostgreSQL image. This database will be used in the current pipeline.

ii) Check out the common library. This is a library that contains all the necessary code and dependencies used by the back-end and front-end applications.

iii) Start Tomcat in the current container.

iv) Start Selenium. Start a new container which will run Selenium. It will connect to the main container’s Tomcat server that hosts the application.

v) Maven Tests. In this stage, Maven runs our automated suite of unit and integration tests, in the current container.

A sample build job using our new pipeline
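Put together in code, the stages above could look roughly like the following scripted pipeline. The image names, repository URL and start-tomcat.sh helper are placeholders for this sketch rather than our real settings, and for brevity the Selenium container is started up front together with the database; the Docker Pipeline plugin’s withRun and inside steps take care of starting the throw-away containers and removing them when the build finishes.

```groovy
// Condensed, illustrative sketch of the front-end build job (placeholder values).
node {
    checkout scm  // the front-end sources of the branch that triggered the build

    // i) Prepare Database: a throw-away PostgreSQL container just for this build
    docker.image('postgres:10').withRun('-e POSTGRES_PASSWORD=secret') { db ->

        // iv) Start Selenium in its own container to drive the browser-based tests
        docker.image('selenium/standalone-chrome').withRun { selenium ->

            // the build-environment image provides the OS, Java, Maven and Tomcat
            docker.image('mcore-build-env:latest').inside("--link ${db.id}:db --link ${selenium.id}:selenium") {

                // ii) Check out and install the common library used by the front-end
                sh 'git clone https://gitlab.example.com/mcore/commons.git commons'
                sh 'mvn -f commons/pom.xml install -DskipTests'

                // iii) Start Tomcat inside the current container (placeholder script)
                sh './start-tomcat.sh'

                // v) Maven Tests: run the unit and integration tests against the stack
                sh 'mvn clean verify'
            }
        }
    }
    // containers started with withRun are stopped and removed automatically
}
```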

Finally, when the build/deploy is over, all Docker containers that were started are deleted. If we want to perform a new build for one of our projects, e.g. due to new code being pushed or due to a merge request, the above procedure starts again from scratch.

The new pipeline gives each build job the opportunity to run independently, with its own database. We can have an unlimited number of parallel build jobs, each running in its own Docker container.
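One detail that makes this parallelism painless is that every resource a job needs is created per build, so nothing has to be shared or locked. As a purely illustrative example, a job could even name its throw-away containers after the build they belong to, making concurrent runs easy to tell apart; the naming scheme below is hypothetical, not something Jenkins or Docker requires.

```groovy
node {
    // Hypothetical per-build naming: each run gets a private database container,
    // so several builds of the same project can run side by side without colliding.
    def suffix = "${env.JOB_NAME}-${env.BUILD_NUMBER}".replaceAll('[^A-Za-z0-9_.-]', '-')

    docker.image('postgres:10').withRun("--name mcore-db-${suffix}") { db ->
        // ... run the build and test stages shown earlier against this database ...
    }
}
```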

Conclusion: Best practices and what to avoid

In conclusion, we have presented the new Continuous Integration (CI) process that really helped our product, mCore. We described the release life-cycle of the platform’s source code, what the previous status was, and the problems that procedure caused. Finally, we showed how we managed to solve those problems using Docker and meet the new requirements: to be faster and more efficient.

Going through this process our experience has led us to the following conclusions.

Firstly, when developing a new CI pipeline, you should avoid having jobs that depend on each other.

That way you will be able to run jobs individually and leverage concurrency and parallel execution of jobs, and eventually the CI procedure itself becomes faster and more flexible.

Moreover, be sure to use the right and most suitable tools for your needs. In order to do so, it is highly important to stay up to date with the new tools and technologies that come up.

Consequently, your CI process should offer efficiency, reliability and quick feedback.
