Code is never good enough — how we do code reviews at Greenbyte

Published in Greenbyte Technology Blog · 6 min read · Jun 4, 2018

By Mikael Baros

This story will tell you just how seriously we take code reviews at Greenbyte, and hopefully inspire you to start taking them (more) seriously. We also share how a lack of focus on reviews can go (terribly) wrong.

This won’t be an anecdotal story about how we discovered the value of code reviews. The fact of the matter is that, from the get-go, we took code reviews seriously at Greenbyte.

In fact, I was a little surprised when I had just joined. People were absolutely brutal in their feedback, but never rude. Maybe I should say they were perfectly candid, and nothing else. Source code always, with no exceptions, turned out better after a review, and many times a review led to a bug being caught before it went out to production.

Since we think that code reviews are one of, if not the, best ways to ensure product stability and code quality, we have been evolving our process continuously.

The beginning

We started out with a simple and powerful system. Every merge of a feature or hotfix branch into the master branch would be done through a pull request. Most of you are pretty experienced with pull requests, what with GitHub’s market penetration, but for those of you who aren’t, I will explain a little. A pull request is basically a more advanced diff of your branch against the master branch. It allows your reviewers to traverse your commit history and place comments and tasks on lines of code.

Early we decided on three cardinal rules:

  • All normal tasks must be completed before a merge
  • Tasks that start with “Consider” can be considered complete without any action being taken
  • Comments must be read and considered

These rules allowed reviewers and authors to know what to expect, and how to behave. Even though it could be demoralizing to open your pull request the next day, only to discover that a particularly meticulous colleague had placed 35 tasks and 99 comments (and no, none of them started with “Consider”), you always had a huge sense of pride afterward. You knew that the code you were about to merge into master was as good as it could get, through a collaborative effort.

Scaling problems

In the beginning, our policy was that every engineer had to approve every pull request. As the team grew, this quickly became unsustainable. But how do we keep the quality we were getting, while still reducing the number of reviewers? At the same time, we didn’t want only a selected few to review everything, as reviews are also a great way to spread knowledge about the code base.

As we’re data-driven at Greenbyte, we turned to the data for answers. Going over all our pull requests, we realized two things:

  • At least one reviewer should have intimate knowledge of the sub-system being reviewed
  • More than two serious reviewers did not increase the overall quality

This led us to the review wheel.

The review wheel (of fortune)

We set up a wheel that contained the names of all developers. Now, selecting reviewers was as simple as:

  • Select the next person on the wheel as a reviewer, no exceptions
  • If the first reviewer is already proficient in the current sub-system, select the next person as a reviewer. If not, traverse the wheel until you find someone proficient in the sub-system, select this person as a reviewer.

The above system helped us maintain a high quality of reviews, while also spreading the knowledge. It helped us to keep key resources from being constantly bogged down in reviews, and anyone reviewing together with an expert in the sub-system automatically learned a lot about the system itself.
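The selection procedure above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Greenbyte’s actual tooling; the names and the proficiency map are invented for the example:

```python
def select_reviewers(wheel, last_picked, proficiency, subsystem):
    """Pick two reviewers per the wheel rules:
    1. The next person on the wheel, no exceptions.
    2. If that person is already proficient in the sub-system, simply
       take the next person; otherwise keep traversing the wheel until
       someone proficient in the sub-system is found."""
    n = len(wheel)
    # Walk the wheel in order, starting just after the last picked index.
    order = [wheel[(last_picked + i) % n] for i in range(1, n + 1)]
    first = order[0]  # next on the wheel, no exceptions
    if subsystem in proficiency.get(first, set()):
        second = order[1]  # first pick is already an expert
    else:
        second = next(p for p in order[1:]
                      if subsystem in proficiency.get(p, set()))
    return first, second

# Hypothetical team and proficiency data for illustration.
wheel = ["alice", "bob", "carol", "dave"]
proficiency = {"alice": {"billing"}, "carol": {"caching"}, "dave": {"caching"}}
print(select_reviewers(wheel, 0, proficiency, "caching"))  # → ('bob', 'carol')
```

Because the first reviewer is always simply the next name on the wheel, review load spreads evenly across the team, while the second pick guarantees sub-system expertise on every pull request.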

But wait, what about security?

As we matured as a software shop, we also realized we had to step up and establish a formal software development security process. For this, we selected the excellent OpenSAMM and started our journey towards compliance. Before, security was just “good practice” and relied on individual coders taking it seriously. Now, we had to do it properly.

As we started implementing OpenSAMM, we also designated a Security Officer (SO) in each of our software development teams. This person has many responsibilities within our OpenSAMM process (which might be a different blog post) but also plays a very important role in our pull requests.

We decided that apart from the two standard reviewers selected from the wheel, every PR should additionally be reviewed by the author’s team’s security officer. The security officer was not supposed to review the code as a normal reviewer, but rather only look for vulnerabilities and poor security practices.

Summarizing our evolution

This journey led us to the current system:

  • Every line of code is reviewed before being merged into our master branch
  • Every line of code is reviewed by two people: an expert in the sub-system, and another developer
  • Every line of code is additionally reviewed by the author’s security officer, only focusing on vulnerabilities and poor security practices

So far, our code review process works great and helps us keep delivering excellence at a high pace, while both maintaining code quality and security.

If you haven’t seen the light of code reviews yet, you might be asking yourself just how important this is. Well, read on for fun, anecdotal, and self-deprecating evidence.

Bad reviews — how bad can it be?

Bad. One of the very first pull requests I reviewed at Greenbyte is a great example of just how this can come back and bite you in the ass. I had not yet fully internalized Greenbyte’s hard but fair PR policy, so when I saw something that worried me, I commented on it, but finally let it slide when the author objected to my opinion.

See, I discovered we had two methods for resolving the data processing service: one that takes a client tag and properly resolves the correct service for the given client, and one that takes no parameter and tries to magically discover the appropriate client. The problem was, after a little digging, I realized that the parameterless function was in fact only valid in a few special scenarios, and knowing which those were was more or less specialist knowledge. So I asked that we rename the method to reflect that it is special and that its use should be restricted. The author objected, and since I didn’t want to give him a hard time, I backed down.
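To make the hazard concrete, here is a hypothetical reconstruction of the two resolvers. The names, the ambient-client lookup, and the error handling are all assumptions made for illustration; the original code is not shown in this post:

```python
_AMBIENT_CLIENT = None  # set only in the few contexts where inference is valid


def _current_client_tag():
    """Hypothetical lookup of the 'ambient' client for the current context."""
    return _AMBIENT_CLIENT


class DataProcessingService:
    def __init__(self, client_tag):
        self.client_tag = client_tag


def resolve_service(client_tag):
    """Safe everywhere: the caller states explicitly which client it means."""
    return DataProcessingService(client_tag)


def resolve_service_for_ambient_client():
    """The renamed version of the parameterless resolver. The name now
    signals that it 'magically' infers the client and is only valid in a
    few special scenarios; failing loudly beats silently resolving the
    wrong client."""
    client_tag = _current_client_tag()
    if client_tag is None:
        raise RuntimeError(
            "No ambient client in this context: use resolve_service(client_tag)"
        )
    return DataProcessingService(client_tag)
```

The rename alone would not fix the underlying ambiguity, but it turns specialist knowledge (“this overload only works in a few scenarios”) into something every caller sees at the call site.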

I bet you can already feel just how much that was going to come back and bite us, can’t you? Fast forward six months. We detected an obscure error where some of our caching jobs would fail and had to be re-queued manually. Since we hate manual tasks at Greenbyte, we set out to solve it. Four programmers collectively spent about a person-month trying to find the problem, with no luck at all. That’s when one of our senior coders had a breakthrough: he discovered that one of our junior developers had accidentally used the parameterless service resolver in the wrong scenario. That was it. If I had stood my ground, I would have saved the company a lot of time that could have been spent doing something much more important.

Don’t skimp on reviews

Trust me, you’ll regret it.
