Code Review Without Queues

Stanislav Davydov
Wrike TechClub
Sep 1, 2021

Programmers write code (surprise!). If it’s a pet project, you’re free to do anything you want with it. But if the project is developed by several people or a whole team, sooner or later you’ll have to do code review. Who will review your code? How do you speed up the process? How do you distribute merge requests evenly among reviewers? There are a lot of questions, and the answers aren’t so obvious. In this article I’ll tell you how Wrike manages the code review process and why we ended up building our own solution.

What we had to deal with during autotest development

At Wrike, the process of delivering a new product release is implemented so well that we deploy at least once per day, sometimes even more frequently. This is largely due to automated testing. We have tests of all sorts and colors: unit tests, API tests, Selenium-based UI tests, screenshot tests, mobile tests, load tests, and even tests for our test utilities.

In mid-2016, we rewrote the UI test framework around the PageObjects-Steps-Test pattern. Writing tests became so easy that there are now over 100 authors, more than 50 of them active. Everyone writes tests, from QA engineers to backend and frontend developers. But when that many people commit to a project, even with a well-defined code style, the code can’t stay clean without a review, which is carried out by the team of QA Automation engineers (and at that time there were only 10 of us). On top of that, merge requests (MRs from here on) should be evenly distributed among the reviewers.

You can probably guess the problem that arises from all of this: reviews take too long. After the initial review, edits may be required, sometimes more than once, then another review, and so on. Each review iteration takes a lot of time, and no one wants to put off their own tasks to switch to reviewing. On top of that, GitLab sends notifications about changes in MRs by email (few people constantly check their mail these days), and even those arrive with some delay. All of this resulted in a review time of over 100 hours at the 80th percentile, not even counting weekends.

New tests, and fixes to old ones, are often tied to changes in the product. A prolonged review slows down the release of the frontend or backend branch, which, in turn, breaks teams’ release plans. So we wanted to bring the review time down to at most 48 hours at the 80th percentile.

Possible solutions to the problem

To reduce review time, we started with the simplest idea: explicitly distributing MRs among the reviewers. Two leads from the team took turns each day looking through the list of new requests in GitLab and assigning engineers from the team, trying to keep roughly the same workload for each reviewer. This didn’t help much, because the main problem, the lack of timely notifications at each stage of the review, remained unresolved. It was also a waste of the leads’ time and distracted them from more important tasks.

At that time I decided to develop a service that would be able to monitor all changes and send notifications to those responsible in Slack.

Initially, the service only had to promptly send notifications about changes in an MR and its transitions between stages: to review, to edits after review, and back again. To detect changes in an MR, I decided to store its last known state in the database. This also made it possible to gather statistics on the lifetime of requests and on all authors and reviewers.

First iteration: a rigged-up service and automated reviewer assignment

Every utility needs a name, if only so you don’t have to call it “the bot that helps with the review”, especially when it’s a project you built from scratch yourself. I wanted the name to reflect the Git system the bot works with, and came up with “JiGit”: short and about Git. In this article, I’ll refer to it by that name.

At first, JiGit was hacked together overnight in a garage, so I chose the most familiar and simplest way to implement it: as a test. We already had unit tests in the project and we knew how to run them in CI. So why not?

What did the “test” do on the technical side? First, it requested the open merge requests for our project from GitLab. Then, for each request, it compared the current state with the one saved in the database, and the author and reviewer received notifications about the specific changes. Finally, the state saved in the database was overwritten with the current one. We had about 20 open MRs at any given moment, and one run over all of them took one to one and a half minutes. The “test” was launched every five minutes, so notifications were delayed by anywhere from zero to five minutes.
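To illustrate, here is a minimal sketch of that polling cycle. The GitLabClient, StateStore, and Notifier interfaces, the MrState record, and every name in it are hypothetical stand-ins for the real GitLab API wrapper, database layer, and Slack client, not JiGit’s actual code:

import java.util.List;
import java.util.Objects;

// Hypothetical sketch: fetch open MRs from GitLab, compare each with the state
// saved in the database, notify about changes, then overwrite the saved state.
public class MergeRequestPoller {

    interface GitLabClient { List<MrState> openMergeRequests(long projectId); }
    interface StateStore   { MrState find(long mrIid); void save(MrState state); }
    interface Notifier     { void send(String slackUser, String message); }

    // Snapshot of the fields we care about for one merge request.
    record MrState(long iid, String author, String reviewer,
                   List<String> labels, String lastCommitSha) {}

    private final GitLabClient gitLab;
    private final StateStore store;
    private final Notifier notifier;

    public MergeRequestPoller(GitLabClient gitLab, StateStore store, Notifier notifier) {
        this.gitLab = gitLab;
        this.store = store;
        this.notifier = notifier;
    }

    public void pollOnce(long projectId) {
        for (MrState current : gitLab.openMergeRequests(projectId)) {
            MrState previous = store.find(current.iid());
            if (previous != null && !previous.equals(current)) {
                if (!Objects.equals(previous.labels(), current.labels())) {
                    notifier.send(current.reviewer(), "Labels changed on MR !" + current.iid());
                }
                if (!Objects.equals(previous.lastCommitSha(), current.lastCommitSha())) {
                    notifier.send(current.reviewer(), "New commits pushed to MR !" + current.iid());
                }
            }
            store.save(current); // overwrite the stored state with the current one
        }
    }
}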

The first step in developing JiGit further was automatic reviewer assignment.

We took into account:

  • Reviewers’ workloads to ensure an equal distribution of MRs.
  • Reviewers’ vacations and sick days (otherwise an MR might wait a few weeks, which didn’t meet our requirements).
  • Code complexity, because a change can be anything from renaming a variable to refactoring the entire project, and not everyone has enough expertise for complex MRs.

I took the reviewers’ workload for the last 60 days from the database. With this approach, even if a reviewer goes on vacation, they won’t come back to a pile of MRs. We mark vacation dates in our duty schedule, because a responsible QAA engineer is needed every day to release a new version of the product, and JiGit gets information about absent reviewers from this schedule.

We solved the code complexity problem by introducing a set of labels in GitLab. The author of the merge request adds a label with the appropriate complexity, and JiGit knows who can review MRs of that complexity. We defined five levels of complexity and described when to use each of them. If the author forgets to add a label, JiGit reminds them.
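Putting those pieces together, the assignment rule might look roughly like the sketch below: filter out absent reviewers and those not cleared for the MR’s complexity, then pick the one with the lightest workload over the last 60 days. ReviewerDirectory, DutySchedule, ReviewStats, and all other names here are hypothetical, not JiGit’s actual code:

import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of reviewer assignment based on workload, absences, and complexity.
public class ReviewerAssigner {

    record Reviewer(String username, int maxComplexity) {}

    interface ReviewerDirectory { List<Reviewer> reviewersFor(long projectId); }
    interface DutySchedule      { boolean isAbsent(String username, LocalDate date); }
    interface ReviewStats       { int reviewsInLastDays(String username, int days); }

    private final ReviewerDirectory directory;
    private final DutySchedule schedule;
    private final ReviewStats stats;

    public ReviewerAssigner(ReviewerDirectory directory, DutySchedule schedule, ReviewStats stats) {
        this.directory = directory;
        this.schedule = schedule;
        this.stats = stats;
    }

    public Optional<Reviewer> pick(long projectId, int mrComplexity, String author) {
        return directory.reviewersFor(projectId).stream()
                .filter(r -> !r.username().equals(author))                      // assumption: never assign the author
                .filter(r -> !schedule.isAbsent(r.username(), LocalDate.now())) // skip vacations and sick leave
                .filter(r -> r.maxComplexity() >= mrComplexity)                 // expertise matches the complexity label
                .min(Comparator.comparingInt(r -> stats.reviewsInLastDays(r.username(), 60)));
    }
}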

When the MR’s code is merged into the main branch, JiGit calculates the review time and saves it to the database. This gives us the review time metric.
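As a rough illustration (the MetricsStore interface and all names are made up, not JiGit’s actual schema), the metric boils down to storing the time between an MR’s creation and its merge:

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: once GitLab reports the MR as merged, store the elapsed review time.
public class ReviewTimeRecorder {

    interface MetricsStore { void saveReviewTime(long mrIid, Duration reviewTime); }

    private final MetricsStore metrics;

    public ReviewTimeRecorder(MetricsStore metrics) {
        this.metrics = metrics;
    }

    public void onMerged(long mrIid, Instant createdAt, Instant mergedAt) {
        Duration reviewTime = Duration.between(createdAt, mergedAt);
        metrics.saveReviewTime(mrIid, reviewTime); // later aggregated into monthly percentiles
    }
}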

Second iteration: SpringBoot and Web interface

As time went on, JiGit needed to be connected to other QAA projects, which, like the test code base, had grown to an overwhelming size. Developers from other teams also learned about the tool and wanted it for their projects. The architecture in the form of a JUnit test wasn’t flexible enough, and the coupling was so high that any change in the code turned into a lottery: no one knew where it might break. It was time for a global refactoring.

I decided to move JiGit to the SpringBoot framework.

There were several reasons:

  • By that time, our team already had sufficient expertise in SpringBoot.
  • If you need a 24/7 service, SpringBoot is the first framework that comes to mind.
  • As the number of projects grows, so does the time between consecutive launches, and with it the delay in JiGit’s reaction to changes in MRs.

To solve the last problem, I also decided to switch to GitLab webhooks instead of explicit polling calls to the GitLab API. This also reduces the load on GitLab.
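A hedged sketch of what such an endpoint could look like in SpringBoot is shown below. Only the X-Gitlab-Token header and the object_kind payload field come from GitLab’s webhook format; the /webhooks/gitlab path, the property name, and MergeRequestProcessor are illustrative assumptions:

import java.util.Map;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

// Hedged sketch of a GitLab merge request webhook receiver.
@RestController
public class GitLabWebhookController {

    public interface MergeRequestProcessor { void process(Map<String, Object> payload); }

    private final MergeRequestProcessor processor;
    private final String secretToken;

    public GitLabWebhookController(MergeRequestProcessor processor,
                                   @Value("${jigit.gitlab.webhook-secret}") String secretToken) {
        this.processor = processor;
        this.secretToken = secretToken;
    }

    @PostMapping("/webhooks/gitlab")
    public ResponseEntity<Void> onEvent(@RequestHeader("X-Gitlab-Token") String token,
                                        @RequestBody Map<String, Object> payload) {
        if (!secretToken.equals(token)) {
            return ResponseEntity.status(403).build();  // reject calls without the shared secret
        }
        if ("merge_request".equals(payload.get("object_kind"))) {
            processor.process(payload);                 // react to MR changes immediately instead of polling
        }
        return ResponseEntity.ok().build();
    }
}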

Since I was refactoring anyway, it was the best time to tackle the accumulated problems: tight coupling in the code, the difficulty of flexibly customizing it to the needs of different projects, hardcoded values everywhere, and so on.

For example, there were a lot of “if” statements in the code. If a project you want to manage with JiGit has a different review process, you have to add a few more “if” statements. It was getting scary, especially for my team, who would have to figure out how it all works.

Here’s a small example of the code:

if (!equalLabels(gitlabMr, dbMr)) {
    List<String> addedLabels = getAddedLabels(gitlabMr, dbMr);
    List<String> removedLabels = getRemovedLabels(gitlabMr, dbMr);
    if (hasReviewer(gitlabMr)
            && !addedLabels.contains(REVIEW_OK)
            && (removedLabels.contains(TO_FIX)
            || addedLabels.contains(WAITING_FOR_REVIEW))) {
        infoMsg(gitlabMr.getReviewer(), NEED_REVIEW_ASSIGNEE
            .format(gitlabMr.getWebUrl()));
    } else {
        infoMsg(gitlabMr.getAuthor(), LABELS_CHANGED
            .format(gitlabMr.getLabels(), gitlabMr.getWebUrl()));
    }
}

As development progressed, the need for a JiGit web interface became more and more acute. And, as always, a fortunate combination of circumstances helped. The website team found out about our service and wanted to use it in their project, where they had the same problems with reviews. I connected their project to JiGit, and at the same time both teams discussed what we wanted from the UI (because the UI also needed a backend part) and wrote the technical specification, and the website team developed and launched the interface for JiGit (kudos to them for that!).

With the help of the web UI, you can specify a list of reviewers for each project, as well as the dates of their vacations or sick leave. You can also view review metrics there, build graphs, connect new projects to JiGit, and configure them.

What problems we’ve solved with JiGit

We got a huge payoff from integrating JiGit.

Code review time decreased. As expected, review time began to decline. Now the average time fluctuates around 24 hours, as you can see on the graph below. And that’s despite the number of MRs growing from 150 to 350 per month!

Merge Requests Lifetime

Metrics help control the review process. With data on the duration of MR reviews, we can investigate the reasons for any increase in the metrics and respond in a timely manner.

The review process has become more transparent. JiGit helps you understand the next steps at each stage of the review, move through the process, and fix issues that could delay the review. For example, it reports failed pipelines or reminds you to add or remove the necessary labels.

Merge requests are assigned automatically. Now even those who are unfamiliar with automation can write their own tests, create a merge request, and not think about who should review their code. JiGit assigns a reviewer automatically.

The test development process can be controlled at all stages. It became possible to control the development of tests at every stage, from creating an MR to merging a branch into the main one. Previously, you had to rely only on the code authors and reviewers, as well as on GitLab’s pre-commit hooks, so it was possible to accidentally or deliberately bypass the accepted review process and merge arbitrary code into the main branch.

Now JiGit won’t let you do that. For code to get from a developer branch into the main one, the following is necessary (a simplified sketch of this check follows the list):

  • All pipelines have passed.
  • A merge request is created.
  • A reviewer from the list of approved ones is assigned.
  • This assigned reviewer is the one who set the “review OK” label.
  • No additional commits arrived after the review.
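
Here is that simplified sketch, under the assumption that the MR state has already been collected into a snapshot; the MergeGate class, the MrSnapshot record, and all field names are hypothetical, not JiGit’s actual data model:

import java.time.Instant;
import java.util.List;

// Hypothetical sketch of the merge checklist described above.
public class MergeGate {

    record MrSnapshot(String reviewer,            // assigned reviewer
                      String reviewOkSetBy,       // who added the "review OK" label, if anyone
                      boolean pipelinesPassed,
                      Instant reviewOkSetAt,      // when the label was added
                      Instant lastCommitAt) {}    // timestamp of the newest commit in the MR

    private final List<String> approvedReviewers;

    public MergeGate(List<String> approvedReviewers) {
        this.approvedReviewers = approvedReviewers;
    }

    public boolean mayMerge(MrSnapshot mr) {
        return mr.pipelinesPassed()                                    // all pipelines have passed
                && mr.reviewer() != null
                && approvedReviewers.contains(mr.reviewer())           // reviewer is from the approved list
                && mr.reviewer().equals(mr.reviewOkSetBy())            // "review OK" was set by that reviewer
                && mr.reviewOkSetAt() != null
                && mr.lastCommitAt().isBefore(mr.reviewOkSetAt());     // no commits arrived after the approval
    }
}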

Now we’re sure that no bad code will leak into the main branch, unless the reviewer misses something by mistake; it’s impossible to completely exclude the human factor. But could I hook up ML and teach JiGit to do the review automatically?

A fast and clear process helps involve colleagues in writing tests. Together with extensive documentation on the test project, the review process, now transparent and fast, no longer discourages those who want to write tests. At Wrike, autotests are written not only by the QAA team, but also by QA engineers and frontend and backend developers. Many thanks to them for that!

Engineers from other departments started using JiGit. It quickly became well known across the company’s engineering department. Many teams began asking to manage their projects with JiGit to solve similar problems: review duration, process transparency, and metrics collection. They liked the QAA team’s approach to the review process, and some teams even changed their own review processes so that JiGit could be connected to their projects more easily and give them more functionality.

Further plans for service development

JiGit has a bright future, at least at Wrike. Currently, it’s connected to six projects, and we’ll connect it to the rest as well.

We’re working on improving functionality both for individual projects and in general.

Some of the main tasks for the future development are:

  • Automatic determination of merge request complexity.
  • The ability to assign reviewers depending on the part of the project in which changes were made.
  • Support for the Wrike API.
  • Initial service configuration via the web UI, so it can be deployed from scratch.

In that bright future, we want to publish the project as open source, because surely many of you have, or will have, similar problems with reviews, and the power of collective intelligence can help with development.

What does the review process look like in your company? Please share your experience and ideas about the service in the comments. Thank you for your attention!
