Search Team CI/CD Pipeline Structure

Gökhan Yılmaz Gökün
Trendyol Tech
May 23, 2020
Trendyol search team pipeline CI/CD

In this article, I will describe how we design our pipelines on the search team at the Trendyol Group. Our CI/CD process has several useful features:

  • Create sync branches for the QA and development teams
  • Build the project with Sonar analysis
  • Sonar quality gate status check
  • QA sync feature pipeline
  • Deploy a feature-based container for testing
  • Feature-based QA testing with an isolated QA environment
  • Automatically create merge requests
  • Consul dynamic config push
  • Environment-based deployment

Our goal as a team is to be able to deploy projects with one click. We designed many pipelines along the way and finally settled on the structure I will describe below.

Create sync branches

Everything starts with pushing code to a feature branch at the origin. As soon as code is pushed to the feature branch, our feature pipeline is triggered and the first stage, ‘Create QA Branch’, runs. In the QA acceptance test project, we create a branch with the same name as the task. With this branch, the QA team starts writing their tests synchronously with development.
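A minimal sketch of how such a stage might look in GitLab CI, assuming the acceptance tests live in a separate repository (the repository URL, token variable, and stage name are hypothetical):

```yaml
# Hypothetical 'Create QA Branch' job; the QA repository URL,
# token variable, and stage name are assumptions
create-qa-branch:
  stage: create-qa-branch
  script:
    # Create a branch in the QA acceptance test project with the same
    # name as the task branch that triggered this pipeline (e.g. SS-1867)
    - git clone "https://oauth2:${QA_REPO_TOKEN}@gitlab.example.com/search/qa-acceptance-tests.git"
    - cd qa-acceptance-tests
    - git push origin "HEAD:refs/heads/${CI_COMMIT_REF_NAME}" || true  # no-op if it already exists
```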

Build with Sonar analysis

Our projects vary in platform and language: we have projects written in Java, Go, Scala, Python, and Shell, and each language brings its own platform requirements. We have created base images to meet these needs, and we still run all of our builds on these customized images.

I will describe the build process for our Java projects. The build begins by downloading the Maven packages. After the Maven dependencies are downloaded, we run our unit tests. The unit test results are collected into a report, which is sent to the Sonar server for SonarQube analysis. On the Sonar server, the report is evaluated against the quality gates we have defined for our team. If the conditions are not met, the code is improved and the analysis is repeated until the quality gate passes.
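As a rough illustration, such a build job might look like the following (the image tag, variable names, and use of the standard SonarQube Maven goal are assumptions):

```yaml
# A minimal sketch of the build job for a Java project; the image tag,
# variable names, and the standard SonarQube Maven goal are assumptions
build:
  stage: build
  image: maven:3-openjdk-11
  script:
    - mvn --batch-mode dependency:go-offline   # download Maven packages
    - mvn --batch-mode verify                  # compile and run unit tests
    - mvn --batch-mode sonar:sonar -Dsonar.host.url="$SONAR_HOST_URL" -Dsonar.login="$SONAR_TOKEN"
```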

After the build and SonarQube analysis are done, the SonarQube server is asked whether the analysis passed; the CI/CD process continues only if it did. If the result is unsuccessful, the pipeline fails at this stage and does not continue until, as mentioned above, the code is improved and re-analyzed.
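This status check can be done against the standard SonarQube Web API; a sketch (the job and variable names are assumptions):

```yaml
# Hypothetical quality gate check using SonarQube's standard
# /api/qualitygates/project_status endpoint
sonar-check:
  stage: sonar-check
  script:
    - >
      STATUS=$(curl -sf -u "$SONAR_TOKEN:"
      "$SONAR_HOST_URL/api/qualitygates/project_status?projectKey=$CI_PROJECT_NAME"
      | jq -r '.projectStatus.status')
    - '[ "$STATUS" = "OK" ]'  # fail the job, and the pipeline, otherwise
```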

QA sync feature pipeline

Our QA team is a backend test team, so the structures they test are generally APIs, consumers, producers, and business logic. This requires many acceptance tests, and we run them fully automated, completely integrated with the CI/CD process. These automated tests get the test data they need from our isolated test environment.

This isolated test environment consists of data-injection code and an Elasticsearch data source. The code builds data sets from real data to cover the business requirements and indexes them into Elasticsearch.
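As an illustration, the data injection could be a pipeline job feeding Elasticsearch's standard _bulk API (the URL variable and the per-branch data file layout are assumptions):

```yaml
# Hypothetical data-injection job: index a curated data set into the
# isolated Elasticsearch; URL and file path are illustrative
seed-test-data:
  stage: prepare-qa-env
  script:
    - >
      curl -sf -X POST "$TEST_ES_URL/_bulk"
      -H 'Content-Type: application/x-ndjson'
      --data-binary "@testdata/${CI_COMMIT_REF_NAME}.ndjson"
```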

As you can see from the pipeline, while these tests are running, this isolated structure is our data source and our APIs read from it. Since the APIs use this resource during the tests, our business-based tests can verify the business logic against the relevant data sets.

As soon as development starts, our QA team writes tests for the same task in parallel. As the tests are written, the data sets they require are prepared and indexed into the isolated test environment. In this way, the feature-based code is covered by unit tests, and its correctness is guaranteed by acceptance tests. And we do all of this autonomously, before the merge request is opened, in our feature pipeline.

With this structure, we complete our tests without disturbing the existing system and verify new work correctly, without any problems.

Deploy a feature-based container for testing

I previously shared an article about how our deployment processes work; you can access it from the link here. The basics of the procedures I will explain shortly are covered in that article.

I would like to talk about the feature pipeline structure that we have just put into practice. It was designed around a concrete need; we find it very useful and want to share it with you.

As a team, our goal is to design a fully feature-based CI/CD process and to have a production-ready code base the moment our feature work is merged. To do that, we needed fully tested, production-ready code before the merge.

We thought of a way to achieve this: if QA test writing and feature deployment were performed simultaneously in the feature-based pipeline, we could do both at the same time, and the result of the feature pipeline would be fully tested, production-ready code. With that idea, we began development.

As the first phase of this structure, we did the `Create QA Branch` work mentioned earlier and started developing synchronously with the QA team. The second stage was deploying this feature-based code to the stage environment. We implemented this deployment using the structure described in my earlier article, naming the deployment ‘branchName-appName’. We use our task codes as branch names, for example SS-1867, and combining the branch name with the project name gives each feature a unique deployment name.

We implemented this deployment by applying deployment.yaml and service.yaml files to Kubernetes. After this, we had a feature-based application running in the Kubernetes environment. However, we had a problem: how would we route requests to it? We decided to solve this with Istio, using its virtual service feature for request routing.

feature_deployment.yaml file for Kubernetes deployment
feature_service.yaml file for Kubernetes deployment
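The original embedded files are not reproduced here; the following is a minimal sketch of what they might contain, assuming an application called search-api and branch SS-1867 (all names, labels, and the image tag are illustrative):

```yaml
# feature_deployment.yaml (sketch): deployment named branchName-appName
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ss-1867-search-api          # branchName-appName
  labels:
    app: ss-1867-search-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ss-1867-search-api
  template:
    metadata:
      labels:
        app: ss-1867-search-api
    spec:
      containers:
        - name: search-api
          image: registry.example.com/search-api:SS-1867   # feature image tag
          ports:
            - containerPort: 8080
---
# feature_service.yaml (sketch): service fronting the feature deployment
apiVersion: v1
kind: Service
metadata:
  name: ss-1867-search-api
spec:
  selector:
    app: ss-1867-search-api
  ports:
    - port: 80
      targetPort: 8080
```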

We started developing the Istio virtual service and thought about the matching rule for routing. We decided to match on a custom header: branchName as the key, and the branch name, for example ss-1867, as the value.

In this way, we defined the matching rule for the Istio virtual service, which you can see below.

virtual_service.yaml file for ISTIO virtual service routing
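The embedded file is likewise not reproduced here; a sketch of such a virtual service, under the same naming assumptions as above:

```yaml
# virtual_service.yaml (sketch): route traffic carrying the branchName header
# to the feature deployment, and everything else to the develop deployment
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: search-api
spec:
  hosts:
    - search-api
  http:
    - match:
        - headers:
            branchName:
              exact: ss-1867
      route:
        - destination:
            host: ss-1867-search-api   # feature-based deployment
      headers:
        response:
          set:
            branchName: ss-1867        # echoed back to verify the routing
    - route:
        - destination:
            host: search-api           # default: develop-based deployment
```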

When a request arrived with this custom header, the Istio virtual service routed it to our new feature-based deployment. In this way, we were able to test our feature-based deployment in the staging environment, and QA could test the code without merging it into the develop branch.

If there is a matching feature-based deployment, the virtual service definition detects the branch and routes the traffic to that deployment. If there is no match, traffic is directed to the develop-based deployment. Because of this, we echo the ‘branchName’ header we sent in the request back in the response header, to verify that the request was routed to a feature-based deployment. When we see this custom header in the response, we know request routing succeeded.
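A quick way to exercise this check, sketched as a pipeline smoke test (the host name and job name are assumptions):

```yaml
# Hypothetical smoke test: confirm the routing by looking for the echoed header
verify-routing:
  stage: deploy-feature
  script:
    - >
      curl -sf -D - -o /dev/null -H "branchName: ss-1867"
      "http://search-api.stage.example.com/health"
      | grep -i "branchname: ss-1867"
```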

This header routing is, of course, only valid as long as the feature branch lives. During this period, the development process continues: both the feature and the feature-based tests are improved until the feature is production-ready.

After completing feature development and testing, we move on to opening the merge request. At this point, the feature-based deployment, service, and virtual service definitions that we no longer need are cleaned up from the stage environment.
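The article does not show this cleanup; it could be as simple as deleting the feature-named resources (a sketch, with the namespace, naming, and the file restoring default routing assumed):

```yaml
# Hypothetical cleanup job: delete the feature-named resources from stage,
# then restore the default routing in the shared virtual service
cleanup-feature:
  stage: cleanup
  script:
    - FEATURE="$(echo "$CI_COMMIT_REF_NAME" | tr '[:upper:]' '[:lower:]')-search-api"
    - kubectl delete deployment,service "$FEATURE" --namespace stage --ignore-not-found
    - kubectl apply -f istio/virtual_service_default.yaml   # assumed default-routing manifest
```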

After the merge request is opened, team members review the code and tests. The review process, with comments, re-development where necessary, and retesting, is repeated until the merge request is approved.

Deciding to merge means the feature has been fully tested and is guaranteed production-ready. After this stage, the code is merged and packaged to advance to the pre-production environment.

Our feature-based pipeline looks like this:

Feature-based pipeline

Our base pipeline flow

Our base branch is the develop branch, and we merge feature-based branches into it. After a merge, our develop-based pipeline starts working.

Our develop-based pipeline includes the following steps:

  • Config Push
  • Build
  • Sonar Check
  • Deploy To Dev
  • Deploy To Stage
  • Test
  • Deploy To PreProd
  • Deploy To Internal
  • Deploy To Prod

Develop-based pipeline

To explain these steps briefly:

In the `Config Push` stage, the configs in our repo are pushed to our configs on Consul, and the configs of our pods are updated without a restart. We plan to explain this structure in more detail in another article.
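As a rough sketch, the push can use Consul's standard KV HTTP API (the key layout, file path, and variable names are assumptions):

```yaml
# Hypothetical config push via Consul's KV HTTP API (PUT /v1/kv/<key>)
config-push:
  stage: config-push
  script:
    - >
      curl -sf -X PUT --data-binary @config/application.yaml
      "$CONSUL_HTTP_ADDR/v1/kv/search-api/config"
```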

In the `Build` stage, the build and SonarQube analysis run against the develop branch. The codebase built at this stage is now a production-ready code base.

In the `Sonar Check` stage, the develop-based analysis results are checked against the quality gates, and the CI/CD process continues if the conditions are met.

There are generally five environments at Trendyol: development, staging, pre-production, internal, and production. The ‘Deploy To {ENV}’ stages in our CI/CD process represent deployments to these environments; you can find the details in the deployment article I mentioned earlier.
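One way to express these environment-based stages is a shared deploy template (the kubectl contexts, stage names, and the manual gate placement are assumptions based on the description below):

```yaml
# Sketch of environment-based deploy jobs sharing one template
.deploy-template:
  script:
    - kubectl --context "$KUBE_CONTEXT" apply -f k8s/

deploy-to-dev:
  extends: .deploy-template
  stage: deploy-to-dev
  variables:
    KUBE_CONTEXT: dev

deploy-to-stage:
  extends: .deploy-template
  stage: deploy-to-stage
  variables:
    KUBE_CONTEXT: stage

deploy-to-preprod:
  extends: .deploy-template
  stage: deploy-to-preprod
  variables:
    KUBE_CONTEXT: preprod
  when: manual   # approval gate for UAT, as described below
```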

After deploying to the development and staging environments, the QA test automation is rerun to catch any problems the merge operations may have introduced.

After this stage, there is a codebase that has been tested and approved as production-ready, and it can be deployed to the pre-production, internal, and production environments. For now, however, we automatically go only as far as pre-production, where we wait for approval of the UAT tests. In the future, we will remove this manual step and switch to a completely autonomous process.

Thank you for reading this far and for sharing this experience with us. Thanks on behalf of the search team and the Trendyol Group.
