What a smooth CI process looks like in modern tech companies

Filip Haftek
Russmedia Equity Partners
Aug 27, 2020 · 11 min read

How to get the best out of Jenkins and have a smooth CI process with pipelines and shared libraries

Sample Jenkins pipeline in Blue Ocean

This article first appeared on the Russmedia Equity Partners website.

In this post, Filip Haftek and Eryk Zalejski (DevOps Consultants at Russmedia Consulting) have gathered four best-practice examples from our companies on how to get the best out of Jenkins and run a smooth CI process with pipelines and shared libraries, together with: Holger Marquetant (Quoka), Konrad Cerny (Erento), Martin Widmann (Russmedia Digital) and Ferencz Farkas (Russmedia Tech).

Introduction

Have you ever thought how great it would be to come to work, write a few lines of code, grab a coffee and let CI do the rest?

It was our dream back in the day.

Or would you like to see all of your team members happy and relaxed, focused only on really important tasks?

It used to be the white rabbit we all chased.

But luckily, we eventually got there at Russmedia Equity Partners — an investor in and operator of online marketplaces, aggregators and SaaS solutions.

We have always focused on delivering good content as fast as possible to ensure our customers' satisfaction, but without the right continuous integration approach this would be almost impossible.

But how do you build great CI/CD? What features should it have? What changes to your company's infrastructure or mindset does it require?

Please take some time and read the guidelines from four Russmedia Equity Partners portfolio companies.

No good pipeline without testing (Quoka — Holger)

Quoka is a general classifieds online marketplace in Germany with over 15 million visits per month and was founded in 1983 as a classifieds paper.

Here at Quoka, we are now fully committed to CI and QA testing.
But it took some time to get to this stage.

Automation has been important in every decade of classified ads and for every medium we published to. We have always optimized our workflows to avoid overhead and improve production times.

Since 2012 we have been fully focused on online, and in 2018 we moved our infrastructure to the cloud.

But what we lacked in the meantime was an optimized workflow for deploying our applications to test and production environments, and we invested a lot of time and human resources into testing every release.

In 2017 we decided to change that, hired our first QA expert and introduced QA test automation. He set up a Jenkins environment for it, and while migrating to Docker and moving to the cloud we also implemented our first CI pipelines.

Our new applications and services are developed cloud-native and microservice-based.
They run on Kubernetes in our AWS cloud.

The older applications also run on an ECS Docker cluster on AWS.

All these repositories are built and deployed by Jenkins pipelines, provided the unit, functional and acceptance tests are green.
Every frontend app or module is tested with a complete regression test suite on the develop stage and at least with smoke tests on commit pushes and pull requests.

BTW, we’re using the GitFlow workflow.

We create releases from the tested develop stage and deploy them to staging and live, but only if the develop tests were green.
We’re all committed to a green-builds-only strategy: no green, no release!

Our regression tests are written with Selenium and run on Selenoid in our Jenkins cluster. At the moment we can run more than 100 Chrome test sessions at the same time, as long as our test environment can handle it. We automatically scale up our frontend application in the test environment before we start the tests.

Today we run over 2,800 tests with every develop build of our main frontend application. This number grows with every new feature we develop, because for every user story we implement, we also write tests. A story is only done when its tests are written.

We use Allure for reporting, and it's great that you don't have to be a QA expert to understand what went wrong, not least thanks to the screenshots we keep for every failure.
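To give an idea of how this fits into a pipeline, here is a rough sketch of a test stage that runs the suite against Selenoid and publishes the Allure report. The commands, URLs and paths are placeholders rather than our actual configuration, and the `allure` step assumes the Allure Jenkins plugin is installed:

```groovy
stage('Regression tests') {
    steps {
        // Run the Selenium suite against the Selenoid hub inside the cluster
        // (the Maven goal and the property name are placeholders).
        sh 'mvn test -Dremote.webdriver.url=http://selenoid:4444/wd/hub'
    }
    post {
        always {
            // Publish the Allure report so every failure, including its
            // screenshot, can be inspected without QA expertise.
            allure results: [[path: 'target/allure-results']]
        }
    }
}
```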

Quoka test results report

On top of this, we still do manual tests on staging before we deploy a release to production. But we only have to focus on the user stories, because we can trust our regression tests and our green-builds-only strategy.

As you can see, we have changed a lot over the last years. We have massively increased our velocity and productivity, not only through CI pipelines and CD but also, and mainly, through automated tests.
But you need the commitment of the whole company, especially from stakeholders and product owners, for green-builds-only releases (they will get used to it after a while).
Without a rule to only deploy green builds, you don't need QA at all, in my opinion. You have to rely on the tests and build quality. If it doesn't matter to you whether tests are failing, then you don't need automated testing.

Another point is your test environment. You will see that with more and more tests, your test environment needs more and more resources. You also have to continuously adapt test flows and configs to new situations.
Keeping the test environment performant is a continuous challenge…

But when you do it right, you will get higher productivity, quality and release frequency with fewer people. You don't have to work on a story several more times after it was deployed because nobody tested it. You will have fewer bugs caused by side effects of new features, because they will be found by the regression tests.
And automated tests run every test as if it were the first time. No laziness from manual testers running the same test again and again every week (it's human to get lazy here), and of course, no test will be forgotten.

Yes, you first have to invest in infrastructure, know-how and experts. It's perfect if you can start by hiring an expert with experience. You will save a lot of time, and they will push you to do it the right way.

It takes some time to write the most important tests for features that are already live (I think most of you already have a working application out there). But it will pay off after a while, and you will get a lot of happy and motivated developers, product owners and stakeholders and, most importantly, a better product.

Go for it!

Support multiple languages easily (Erento — Konrad)

At Erento, the biggest European rental marketplace, we run a microservice infrastructure orchestrated by Kubernetes on Google Cloud (GKE).

We give developers the freedom to write microservices in the language of their choice.

Currently, we actively work with NodeJS (NestJS), Java (Spring), Go and Python, the frontend is an Angular SPA, and we also support older services written in PHP, Lua and others. Usually, the decision on which language to pick is based on the challenge we want to solve. Over time we realized there is no good or bad programming language, just that each is slightly better at different things.

Every good developer I have ever met is constantly learning, and this freedom is great for fostering the developer experience. But every coin has two sides: using multiple languages means different strategies when it comes to the deployment process.

To overcome this burden, your CI/CD has to be set up properly, and each developer should be able to build and deploy the application as easily as possible. It is important to decide on some rules before giving everybody a lot of freedom, for example:

– Every deployment has to run tests first, and the tests have to finish successfully. And yes, you need to write those tests — as many as you can!

– Every service will be built with the same style guide.

– Every build needs to be successfully deployed to a staging environment first.

– etc.

We leverage the power of Jenkins and its pipelines to simplify the developer's deployment journey. Our current Jenkinsfile looks like this:

Sample Jenkinsfile for Nest.js pipeline.

or

Sample Jenkinsfile for Golang pipeline.

Too simple to be true you say? No, that is really it — that’s all you need.
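As a rough illustration of how short such a Jenkinsfile can be (the library name and the `nodeServicePipeline` step below are placeholders, not our actual template names):

```groovy
// Load the shared pipeline library and delegate the whole build to one step.
@Library('ci-pipelines') _

nodeServicePipeline(
    serviceName: 'my-nestjs-service',
    nodeVersion: '14'
)
```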

We define one template per language and enforce the same build strategy. The template defines not only how to build the service but also how to execute tests, linters and other important tasks to make sure your build is stable.

You can simply source those templates from your repository via "Global Pipeline Libraries", and it is no more than a simple encapsulation of your existing Jenkinsfile. For more information, you can check the following article: Jenkins shared libraries.
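To show what such an encapsulation might look like, here is a sketch of a per-language template living in the shared library's `vars/` directory. The stage names and commands are assumptions for a Node.js service, not Erento's actual template:

```groovy
// vars/nodeServicePipeline.groovy in the shared-library repository.
// One template per language: build strategy, tests and linting are enforced here.
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Install') { steps { sh 'npm ci' } }
            stage('Lint')    { steps { sh 'npm run lint' } }
            stage('Test')    { steps { sh 'npm test' } }
            stage('Build image') {
                steps {
                    // Build the service as a Docker image; pushing it and rolling
                    // it out to the cluster would follow the same pattern.
                    sh "docker build -t ${config.serviceName}:${env.GIT_COMMIT} ."
                }
            }
        }
    }
}
```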

Set up a working environment for every single pull request (RMD — Martin)

At Russmedia Digital we use an on-premises cloud solution called OpenShift. It adds additional layers on top of Kubernetes and allows us to quickly deploy and scale our applications. We opted for this solution as we have the infrastructure and bandwidth in house, and hosting it on someone else's cloud would be way more expensive. We keep in mind that, should the need arise, we could still switch to a cloud provider for traffic peaks our datacenter can't handle.

Development at Russmedia Digital is mostly done in PHP (using WordPress or Laravel) and JavaScript (mostly Vue applications using Nuxt.js). Version control is done in a self-hosted GitLab installation; for CI/CD we chose Jenkins, running on OpenShift.

Our goal with the CI/CD setup was to build the applications automatically and be able to deploy them with as little work as possible. This was a requirement, as certain products in our portfolio are sold to external companies and keeping all the installations up to date turned out to be very tedious and time-consuming.

Our CI/CD approach is set up so that each merge request (you might know these as pull requests on GitHub) automatically triggers a build of the application for all configured clients on OpenShift. Once the build completes, the QA tasks (linting and testing) are run in the image, and on success the build is deployed to a special staging project where QA people (project managers, developers) can access it to verify the changes.

Thanks to the internals of an OpenShift service this is very easy to achieve: the build is triggered, the deployment is created from it, and a service with a corresponding public URL endpoint is created.
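As a sketch of how a pipeline can drive this, the step below processes an OpenShift template, starts a build and exposes the resulting service for the merge request. The template path, object names and the `staging-previews` project are placeholders, not our actual setup:

```groovy
stage('Deploy MR preview') {
    steps {
        sh """
            # Create or update build config, deployment and service for this merge request
            oc process -f openshift/preview-template.yaml \\
                -p NAME=myapp-mr-${env.CHANGE_ID} | oc apply -n staging-previews -f -
            # Build the image for this merge request inside OpenShift
            oc start-build myapp-mr-${env.CHANGE_ID} -n staging-previews --from-dir=. --follow
            # Create a route so QA gets a public URL for this build
            oc expose svc/myapp-mr-${env.CHANGE_ID} -n staging-previews || true
        """
    }
}
```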

Merging is only possible after a successful pipeline run. Thankfully this is perfectly integrated into GitLab, which shows the pipeline status for the merge request and automatically triggers a new run should the branch be updated. Adding a special `rebuild` comment can also trigger a manual pipeline run should the need arise.

Like Erento, we use the power of Jenkins' Global Pipeline Libraries, where we have pipelines set up that just need to be configured in the project's Jenkinsfile. For example, the CI/CD pipeline for one of our regional newspaper projects is as simple as 8 lines of code:

Sample Jenkinsfile for OpenShift Node.js pipeline.
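As a sketch, such an 8-line Jenkinsfile could look roughly like this (the library name, the `openshiftNodePipeline` step and its parameters are assumptions rather than the actual project file):

```groovy
// Load the shared pipeline library and hand over the whole CI/CD run.
@Library('rmd-pipelines') _

openshiftNodePipeline(
    project: 'regional-newspaper',
    clients: ['client-a', 'client-b'],
    nodeVersion: '12'
)
```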

By adding additional entries to the `clients` array we can trigger parallel builds for each merge request, allowing us to easily automate the building, testing and deployment of multiple clients.
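Inside the shared library, the `clients` array can be fanned out into parallel branches. A minimal scripted sketch of the idea, again with assumed names:

```groovy
// vars/openshiftNodePipeline.groovy (hypothetical): one parallel branch per client.
def call(Map config) {
    def builds = [:]
    config.clients.each { client ->
        builds[client] = {
            node {
                checkout scm
                // Build, test and deploy this client's variant on OpenShift
                sh "oc start-build ${config.project}-${client} --from-dir=. --follow"
            }
        }
    }
    parallel builds
}
```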

Once the merge request is merged into master, an additional run of the CI/CD pipeline is made. If it completes successfully, the developer is presented with the option to have OpenShift deploy the build artifact to the live environment automatically. This can be triggered for each client individually.

To avoid cluttering OpenShift, a nightly job runs on Jenkins and cleans up any builds/deployments created by the automatic tasks. This has the benefit of keeping the system clean, but adds the small drawback that the services created during a merge request are no longer available the day after. As it is very easy to trigger a manual rebuild, this turned out not to be much of a hassle.

Support multiple clients efficiently (RT — Ferencz)

At Russmedia Tech we develop SaaS platforms in many vertical businesses, such as job portals, real-estate portals and car portals. We deploy the same code for many clients, and a well-crafted CI/CD pipeline is a must.

We found Jenkins to be by far the best solution for our needs, for many reasons. Let me mention the most valuable aspects, the ones we rely on the most:

  • Open source — we make a lot of builds and deployments daily, so paid CI/CD solutions are way too expensive.
  • Running on a Kubernetes cluster — Jenkins is able to scale its workers simply by starting new containers (pods) on its own Kubernetes cluster — very cost-effective and fast (see the sketch after this list).
  • We can run parallel builds — this is one of the most outstanding features — we build and deploy for as many clients as we want in parallel.
  • Pipelines are basically Groovy scripts — as a developer, you really do not need anything more than this.
  • You can write your own pipeline specific to your application; you just need a bit of Groovy knowledge.
  • A lot of great plugins — Git organization folders, Blue Ocean… just to list a few.
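To illustrate the Kubernetes scaling mentioned above, here is a minimal sketch of a pipeline whose build agent is started as a pod on the Jenkins Kubernetes cluster (using the Jenkins Kubernetes plugin; the container image and commands are placeholders):

```groovy
pipeline {
    agent {
        kubernetes {
            // Each build gets its own short-lived worker pod.
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: node
    image: node:14
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('node') {
                    sh 'npm ci && npm run build'
                }
            }
        }
    }
}
```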

Summary

Well-suited CI/CD is a must in a modern IT company. As the IT world moves towards pipeline-as-code and shared libraries, every good pipeline should contain at least these three major steps (a minimal sketch follows the list):

  • build: with the use of containers, building your applications as Docker images and promoting them to the next stage (dev, staging/beta and production) is a very common solution in most modern CIs.
  • test: whether you do unit, integration and acceptance tests or just one of them is your choice, but one of our best QA engineers, Roman Ielovskykh, always recommends keeping all of them, as Quoka has been doing for years, because it just pays off.
  • deploy: whether you should deploy every branch, or only develop or master, depends on the process you have in your company (GitFlow, GitHub flow, others) as well as on whether you are able to spawn new environments on demand for any branch (which is possible in our companies thanks to our Kubernetes infrastructure).
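As a minimal sketch of these three steps in a declarative pipeline (the image name, registry and deployment command are placeholders, not any of our companies' actual configuration):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the application as a Docker image tagged with the commit
                sh 'docker build -t registry.example.com/myapp:$GIT_COMMIT .'
            }
        }
        stage('Test') {
            steps {
                // Run the test suite inside the freshly built image
                sh 'docker run --rm registry.example.com/myapp:$GIT_COMMIT npm test'
            }
        }
        stage('Deploy') {
            // Only promote green builds of the develop branch to the next stage
            when { branch 'develop' }
            steps {
                sh 'docker push registry.example.com/myapp:$GIT_COMMIT'
                // rollout to dev/staging would follow here (kubectl, helm, oc, ...)
            }
        }
    }
}
```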

Great CI/CD:

  • enables companies to deliver good content fast and in a secure manner,
  • helps people avoid repetitive tasks which can be automated,
  • makes sure that the team can focus on delivering quality content to their customers.

Building such an approach requires time, some process changes and, most importantly, a change of mindset.

At Russmedia Equity Partners, the portfolio companies not only build great products, produce high-quality code and develop modern CI/CD pipelines, but also share all the great output they produce (CI pipeline templates, code libraries, infrastructure blocks and much more) with each other. With this approach, every new member of RMEP can get on board easily and very soon become a leader in the area they operate in.

If you would like to know more about our Jenkins setup on GKE, bare metal or OpenShift, or about how to introduce Jenkins pipelines, shared libraries or any other topics around CI, please do not hesitate to ask in the comments — we are more than happy to help you :)
