CI/CD and QA: A Complete Automated Product Pipeline

Explaining the concepts with a practical example using GitHub Actions and SonarCloud

Roberto Sannino
LARUS
Aug 8, 2022


Today’s software companies use agile methodologies to speed up the development-feedback cycle of their products. The push for rapid feature delivery brings challenges in terms of product quality assurance, as well as automatic, instant and painless deployment.
Adopting DevOps methodologies such as CI/CD is essential to simplify, strengthen and automate the whole product pipeline in a reliable way.

In this article I will give a practical demonstration of how to automate a CI/CD product pipeline easily, using GitHub Actions to automate builds, tests and releases, and SonarCloud to keep track of code-base quality. Furthermore, I will show you an option to automate the product deployments.

Introduction

(Image source: https://it.m.wikipedia.org/wiki/DevOps)

Terminology

Before we get started, let’s go over some terminology.

DevOps
Methodologies that have the objective of creating a culture in which software design, testing and release can take place quickly, frequently and efficiently. In doing so, the development team (Dev) and the operations team (Ops) become integrated into a single, cohesive one.

CI/CD
CI: Stands for Continuous Integration and concerns the practice of integrating the developers’ work into the main branch in a fast and automated manner. This phase should rely on automated tests, builds and validation to ensure a safe merge.
CD: Stands for Continuous Delivery and covers today’s need to deliver release changes to production environments quickly. It can also mean Continuous Deployment, which adds automated deployment to the product pipeline.

QA
Methodologies for Quality Assurance, focused on reducing the risk of errors and bugs and raising confidence in the software by maintaining high-quality standards for the code base.

Our Goal

As stated before, the main goals of an automated DevOps product pipeline, which we will focus on here, are:

  • to merge the developer’s new code securely into the main branch (CI)
  • to maintain high-quality standards for the code-base (QA)
  • to generate new deployable services as soon as possible (C. Delivery)
  • to simplify and automate the process of product deployment in both staging and production environments (C. Deployment)

The Application

In the following, we will use as an example an application consisting of two Java Spring Boot micro-services and a React UI, deployed with Docker.

The application simply shows a web page with two messages coming from the respective micro-services (mserviceA and mserviceB).

The micro-services’ Docker images are built using the Jib Maven plugin with an eclipse-temurin base image, while the static files produced by the React UI build are served from an Nginx base image.

The overall project structure is the following:

  • .github (GitHub Actions for PRs and Product Release)
  • deploy (project-build, docker-compose files and Nginx resources)
  • mserviceA (Spring-Boot Java BE micro-service module)
  • mserviceB (Spring-Boot Java BE micro-service module)
  • ui (React FE application module)

Of course, the concepts and approaches used here are valid and can be applied to any other type of application and deployment mode.
Moreover, the tools used (GitHub Actions, GitHub Container Registry and SonarCloud) can be replaced according to your project and company needs.
As the article’s goal states, the following simply shows one possible implementation of CI/CD & QA on a real, common use case.

Configure the Pipeline

Continuous Integration

Let’s see how to use GitHub Actions to perform both builds and tests when a Pull Request (PR) targeting the main branch is opened.

The first thing to do is to configure workflows under .github/workflows in our project. For this purpose, we decided to have three different workflows, one per module (mserviceA, mserviceB, ui), each triggered by a PR containing code changes in that specific module.
This way, the actions run only when really needed: if a PR changes only the mserviceA module, there is no need to build and test the other modules.
The workflow for mserviceA looks like the following:
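What follows is a minimal sketch of such a workflow, assuming the module lives in the mserviceA folder; the workflow and step names are illustrative, not the exact file from the repository.

name: mserviceA - Build & Test

on:
  pull_request:
    branches: [ main ]
    paths:
      - 'mserviceA/**'
  push:
    branches: [ main ]
    paths:
      - 'mserviceA/**'

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Java 11 with the built-in Maven repository cache to speed up the run
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '11'
          cache: maven
      # Build the module and run its tests
      - name: Build and test mserviceA
        run: mvn -B verify --file mserviceA/pom.xml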

The workflow’s jobs are pretty simple:

  • triggered by a PR or push on the main branch that includes files of the module mserviceA
  • uses an Ubuntu runner with Java 11
  • caches the Maven repository to speed up the workflow
  • runs the Maven build with tests for the module

If the module’s build breaks (e.g. here we changed the code without updating the tests), the PR will be marked with a red X (some checks have failed) and the error details will be available in the runner logs.

PR’s new code breaks in the build and test phase

By building and testing the module, this workflow helps avoid merging erroneous code into the main branch, but it does not provide any information about the quality of the new code.

Quality Assurance

To ensure high-quality standards for the code-base, we can use SonarCloud. This tool does a great job for us by analyzing known bugs, vulnerabilities, security hotspots, code smells, test coverage and code duplication, both for the overall code base and for the newly written PR code.

SonarCloud-GitHub integration is really easy: from the SonarCloud web app, we just need to sign in using our GitHub credentials, import our organization and then configure our Sonar projects.
This last step deserves a deeper discussion: you can either import your GitHub projects directly into Sonar or configure them manually. I have collected some pros and cons of this choice:

Direct Import

PROs

  • instant integration with your repository project
  • no need for additional CI workflow specifications
  • the Sonar analysis is not performed in the workflow runner, saving time and therefore money

CONs

  • analysis is performed project-wide; there is no way to have per-module analysis (unless each module has its own repository)
  • it is difficult to have test results and coverage read by Sonar without committing them to the repository, which means that some of the build subfolders generated by the test frameworks must be pushed rather than ignored

Manual Import

PROs

  • per-module analysis
  • since the command is executed in the runner, there is full control over the test details and coverage files generated by the test execution
  • full and easy control over how and when the Sonar analysis runs, without having to learn how to configure SonarCloud

CONs

  • manual integration
  • additional CI workflow specifications
  • the Sonar analysis is performed in the workflow runner

It is easy to see that the pros of one alternative are the cons of the other. For this project we chose the second approach, which was the best suited; in fact, its cons are mitigated by the circumstances:

  • manual integration is a one-time job and takes about two minutes per module
  • the additional CI workflow specification cost one line in our workflow (it will be shown in the following)
  • workflow runners are free for open-source projects and, in any case, the Sonar analysis usually lasts less than one minute for medium-sized projects

So now let’s see the crucial part of the mserviceA sonar-project.properties and the change made to its workflow in order to trigger the Sonar analysis.

Leaving out the project and host details, all that is left is the test coverage and exclusion configuration: here the JaCoCo plugin was used to report unit and integration test results, and the main service class was excluded from code coverage just as an example of how to use this property.
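A minimal sketch of these properties, assuming the default JaCoCo XML report location (the excluded class name is illustrative):

# Aggregated JaCoCo XML report produced by the unit and integration test runs
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
# Exclude the Spring Boot entry-point class from code coverage, as an example
sonar.coverage.exclusions=**/MserviceAApplication.java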

Now let’s see how the PR workflow for mserviceA changes: in the last job step we just need to add the Sonar analysis to our build command, using our SONAR_TOKEN secret to authenticate against SonarCloud on the right Sonar project.
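A sketch of the updated step, assuming the standard SonarScanner for Maven goal (the step name is illustrative):

      # Build, test and send the analysis to SonarCloud in a single Maven invocation
      - name: Build, test and analyze mserviceA
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: mvn -B verify org.sonarsource.scanner.maven:sonar-maven-plugin:sonar --file mserviceA/pom.xml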

With this integration in place, each PR will now also be checked by SonarCloud and, depending on the Sonar settings (minimum code coverage percentage, etc.), the PR will be marked as failed or not.

PR’s new code has passed SonarCloud analysis as well as build and tests

The SonarCloud project page will also show the module summary.

SonarCloud module summary

Continuous Delivery

Now that the CI & QA pipeline is in place, we need to focus on how and when versioned services’ packages are created, used and automatically deployed in the right environments.
Usually, developers work on a local and a staging environment, while the first is easy to update using local images, the second one need some sort of automated modules build and deploy. Having staging/dev environments, is also crucial to perform manual testing; in fact, even if the unit and integration tests reach 100% of code coverage, it is still not possible to say that the code is really safe or correct: order of operations, ui rendering, different platforms and many other factors, can still affect our product.
To address these needs, another workflow has been configured that builds the project modules and push the resulting docker images to the GitHub Container Registry (ghcr), these images are marked as dev.

Differences from the previous workflow:

  • builds all the modules
  • creates, tags and pushes Docker images to ghcr
  • does not execute tests (they already run on every PR, so there is no need to waste time and money here)

Note: in the *on* directive of the workflow, we specify a push on a new branch (dev-env) as the trigger; alternatively, the main branch can be used, so that each newly merged PR creates new dev packages. It is up to you to choose whether to create dev packages in a fully automated way or with an on-demand automation.
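A condensed sketch of the BuildDevEnv workflow, assuming Jib for the back-end images and a Dockerfile-based build for the UI (only mserviceA is shown, mserviceB is analogous; image names are illustrative):

name: BuildDevEnv

on:
  push:
    branches: [ dev-env ]

jobs:
  build-and-push-dev:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '11'
          cache: maven
      # Authenticate against the GitHub Container Registry
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build the back-end image with Jib and push it, skipping tests
      - name: Build and push mserviceA (dev)
        run: >
          mvn -B -DskipTests compile jib:build
          -Djib.to.image=ghcr.io/${{ github.repository }}/mservicea:dev
          --file mserviceA/pom.xml
      # Build the UI static files into an Nginx-based image and push it
      - name: Build and push ui (dev)
        run: |
          docker build -t ghcr.io/${{ github.repository }}/ui:dev ./ui
          docker push ghcr.io/${{ github.repository }}/ui:dev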

After each run the repository packages will be updated and will be available for pull.

GitHub Container Registry - dev builds

Similarly, another workflow is present that pushes Docker images for each published release; these images are intended for production environments and are tagged with the respective release version.

As you can notice, the images are no longer labeled as dev and their version is ${GITHUB_REF##*/}, that is, the release name we used to publish it on GitHub (e.g. 1.0.0-release).
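A sketch of this release workflow; the job body mirrors the dev one, only the trigger and the image tag change (only the ui step is shown as an example):

on:
  release:
    types: [ published ]

jobs:
  build-and-push-release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # For a release event GITHUB_REF is refs/tags/<release-tag>, so
      # ${GITHUB_REF##*/} extracts the tag name, e.g. 1.0.0-release
      - name: Build and push ui (release)
        run: |
          VERSION=${GITHUB_REF##*/}
          docker build -t ghcr.io/${{ github.repository }}/ui:${VERSION} ./ui
          docker push ghcr.io/${{ github.repository }}/ui:${VERSION}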

What about automated deploy?

For automated deploys to our environments we chose to use WatchTower, a process for automating Docker container base image updates. This service, containerized as well, acts as a monitor of the configured registry: at a scheduled time it checks for new versions of the labeled services in order to gracefully update them. In our case, when the BuildDevEnv workflow finishes, the new images are pushed to the container registry and found by WatchTower, which in turn upgrades our environment. Here is the staging environment docker-compose file:
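A condensed sketch of that compose file, assuming dev-tagged images from ghcr (the owner/repo placeholders and port mapping are illustrative):

version: "3.8"
services:
  mservicea:
    image: ghcr.io/<owner>/<repo>/mservicea:dev
    labels:
      - "com.centurylinklabs.watchtower.scope=cicdqa"
  mserviceb:
    image: ghcr.io/<owner>/<repo>/mserviceb:dev
    labels:
      - "com.centurylinklabs.watchtower.scope=cicdqa"
  ui:
    image: ghcr.io/<owner>/<repo>/ui:dev
    ports:
      - "80:80"
    labels:
      - "com.centurylinklabs.watchtower.scope=cicdqa"
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "com.centurylinklabs.watchtower.scope=cicdqa"
    environment:
      - WATCHTOWER_POLL_INTERVAL=60       # scan for new images every minute
      - WATCHTOWER_INCLUDE_STOPPED=true   # also update stopped containers
      - WATCHTOWER_CLEANUP=true           # remove old images after an update
      - WATCHTOWER_SCOPE=cicdqa           # only monitor services with this scope label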

Each service has the cicdqa scope label, used by WatchTower to know which services must be monitored.
The WatchTower service has been configured to run a scan every minute, to also update stopped containers and to remove old images.

Although production environments often rely on cloud orchestrators with built-in strategies for zero-downtime deployment, this approach can be a starting point for small projects; for a production environment it could be possible to schedule the WatchTower scan during non-working hours, e.g.:
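For instance, WatchTower accepts a WATCHTOWER_SCHEDULE cron expression (six fields, seconds first) in place of the fixed poll interval; in the watchtower service above, a nightly run could look like this:

    environment:
      - WATCHTOWER_SCHEDULE=0 0 3 * * *   # run every day at 03:00:00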

Handling automated deploy errors
Unfortunately, WatchTower lacks a rollback-on-failure feature, so if, for instance, a new image requires a change in the docker-compose file, the deploy will break, leading to partial or full unavailability of your application. To overcome these kinds of problems we can rely on the overall architecture of the environments and on release best practices: using staging/dev environments to test both features and deploys before publishing a release should be a must.

Conclusion

In this article we have briefly explained why DevOps practices must be taken into consideration in a product pipeline and how to actually apply them in a common use-case scenario, using CI/CD and QA to safely and automatically merge, release and deploy packages, while maintaining good overall project quality.
Thank you for reading this article!

Notes

The project used for demonstrating these concepts is open and can be found here; you can also use it as a template. Feel free to follow me on LinkedIn.
