No fuss, no ego: Code Reviews done right

Robin Eisenberg
Published in Botify Labs
10 min read · Apr 11, 2018
Our goal is to achieve 0 WTFs/min

Here at Botify we’ve been reviewing code since the beginning of our journey, and we’ve learned a lot along the way. After four years of changes, we believe we’ve landed on a code-review method that works (for us at least), and we thought we’d share and discuss it with the world.

Disclaimer: we do not claim that this is the ‘right’ way to review code as a team. This is a method that works for us, for our team size, and our stacks and tools. Mileage may vary.

TL;DR

For fast, effective, and efficient reviews within an engineering team:

  • Leave your ego at the door
  • Have a team mindset
  • Align the team on why and how reviews are performed
  • Set review rules and follow them
  • Discuss changes to the rules regularly

Why?

Before delving into how we review code here at Botify, I want to talk about why we review code. It is paramount in an engineering team that the whole team be aligned on why they review code, and the effect it has on the team and on the codebase.

Betterment of the product

Reviews help us in our quest to provide the best possible product for our customers.

As a team, all Botifyers are extremely customer-centric: it is one of our core values and permeates throughout the whole company, from the Engineer performing a code-review to the CSM onboarding a new customer.

In that mindset, the first reason why we review code is for the betterment of the Botify product. A good code-review is like a time-machine: it allows our team to catch bugs before they even arise, and therefore before the customer sees them. We review code first and foremost because it makes the product better, and in turn because it makes the customer’s experience on Botify better.

Betterment of the code

Reviews make our code stronger

Some review comments aren’t about function — they are purely about code, and code architecture. Reviews allow something that we are lucky to have in our profession: a second set of eyes on our work.

In that sense, we review code so that we can catch bugs, discrepancies, edge-cases, DRY cases, and architectural mistakes before they get to production. A “Review KO”, in a sense, is a headache from the future that we’ve stopped before it even happens. Reviews allow everyone to have a say in our codebase, since everyone will have to maintain it. We review because it makes our production code stronger, less prone to mistakes, and more in line with what the whole team wants to do.

Betterment of the team

Reviews power-up our team

A very important aspect of reviews is that they allow for the diffusion of knowledge within the team. When maintaining a large codebase, reviews allow everyone to have a look at every part of the codebase, and therefore to learn about parts of the code that they are not working on. The more experienced developers get to share their knowledge just by doing their job, and in return they also learn new techniques from the more junior developers always trying the latest tech. Everyone benefits from this sharing of knowledge — reviews help our team get better at what we do.

How?

There are a few prerequisites for a well-oiled reviewing machine. They are:

A Team mindset

This is probably the most important part of why our reviews work, so follow closely. There are three simple rules that all engineers on the team must always keep at heart, and they are at the core of our review method:

  1. Egoless — Leave your ego at the door
  2. Goodwill — We are all in the same boat
  3. We review code to make the product, the code, and the team better

These rules are what allow us to have reviews that focus on the right problems.

“Leave your ego at the door” simply means that we review code, not people. It means that code reviews will always be about the code, and never about the people writing it. It requires developers to put their ego aside when reviewing and being reviewed. It hinges on our engineers’ individual belief that their team members are good at what they do, always, and that a mistake (or near-sightedness in a solution) can happen to anyone, from the most experienced Rockstar developer to the Junior engineer just starting out. It rests on the belief that code is not an extension of the person writing it. This is the most important part of our method.

“We are all in the same boat” simply means that we ask engineers to have goodwill towards their fellow team members, and goodwill towards the product. That means that it is their job, when submitting a PR, to:

  • Limit the risk of the code in terms of the product
  • Present their PR in a readable way to their coworkers

We consider that reviewees must always present a clean, commented, and well-committed PR: failing to do so is abrasive for the team members who are going to review it. In that sense, we ask that developers review their own PRs before submitting them, rewriting the git history if necessary, so that they can present something readable and understandable by the whole team.

And finally, as described above, we as a team know why we are reviewing code, and trust that a review comment is made in the hope of bettering either the product, the codebase, or the team.

A Review Convention

We believe that it is important to have a guideline that engineers on the team follow when reviewing code. Here’s ours as an example:

The Review Convention:

Code reviews are about code, not people.

Review comments don’t always have to be about the betterment of the reviewed code; they can also propose improvements in code style, comments, documentation, or tests.

As a reviewee:

- When submitting a PR, check that your PR is valid, does what it should do, passes tests (CI), and does not contain any code that is not related to your PR

- Never take a PR comment personally (PR reviews are about code, not people).

- All constructive comments are valid, regardless of the rank of the developer making them. Address all comments, either in the code or in the PR comments.

- If you don’t agree with one of the comments made to your code, discuss it constructively with your peers. Involve a third party if the discussion does not lead to a solution.

- When addressing review comments, create one commit per fix, not a big commit that ‘Fixes all review comments’.

As a reviewer:

- Comments should always be constructive: never put a comment that does not indicate the problem and how to fix it. If the comment requires discussion, leave it open for discussion.

- Comments should always be about the code: no personal vendettas.

- As a general rule, a question should not be a KO reason: if you have comments, phrase them as affirmations, not questions.

- Never ask a question that can be answered by simply reading the code: this just shows you have not read or understood the code in question.

- Sometimes problems in the code are not functional. Valid reasons for KO:
  - Code does not work, or does not work as intended (edge cases)
  - Code does not follow the code style of the repo
  - Code patterns do not follow the rest of the repo
  - Code does not have enough comments (things are too hard to understand)
  - Code is missing tests
  - Code is missing documentation (if needed)
  - PR format is not valid
  - Branch format is not valid
  - Code does not pass Lint
  - Code does not pass Tests

This document is read by all newcomers and all rules described here are at the core of how we review each other’s code.

Clear statuses for all PRs

It’s important for everyone on the team to be able to know what state a PR is in at any time. POs and the QA team need to know when a task is started, in review, reviewed, and ready to be tested, so they can prepare their workflows around the development team. We simply use GitHub tags to manage everything, and all PRs follow a convention for setting or changing their status:

Each tag on a PR means something to someone: we speak of ‘hot’ or ‘cold’ states for the person looking at the PRs, representing whether they should do something (review, fix reviews, or merge). Everything has its place, and everyone knows when they intervene on a PR. To ease the use of such a system, we scripted a lot of the transitions on these tags to trigger JIRA, CI/CD, or QA actions.
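The tag system behaves like a small state machine: each label tells one person whether they need to act. A minimal sketch in Python (the label names and roles below are illustrative assumptions, not our actual convention):

```python
# Hypothetical label-driven PR state machine. Label names and roles are
# illustrative only; map yours to whatever convention your team uses.

# Each label maps to (owner, is_hot). "Hot" means that person must act now
# (review, fix review comments, merge, ...); "cold" means nothing to do yet.
PR_STATES = {
    "to review": ("reviewer", True),
    "review ko": ("author",   True),   # author must address comments
    "review ok": ("author",   True),   # author may merge
    "in qa":     ("qa",       False),
    "merged":    (None,       False),
}

def who_should_act(labels):
    """Return the people who need to do something on a PR, given its labels."""
    actors = []
    for label in labels:
        owner, hot = PR_STATES.get(label, (None, False))
        if hot and owner:
            actors.append(owner)
    return actors
```

With a mapping like this, a dashboard or bot can answer “who is blocking this PR?” by looking at labels alone, which is what makes the scripted transitions possible.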

Script everything

Just like for linting, in order to reduce noise and overhead work for engineers, we’ve scripted a lot of interactions between reviews and our task-tracking tools. Review actions on GitHub trigger issue transitions in JIRA, and JIRA comments or transitions trigger GitHub review-status changes.

When a PR is created, our GitHub bot links the PR to the JIRA issue, in JIRA and in GitHub:

When a PR changes states, the JIRA issue is transitioned accordingly by the bot:

We even have the bot automatically deploy a test environment after the CI build, for automated testing.

We believe that anything that can be automated in a team’s process should be automated, and we’ve spent time creating tools that make our lives easier. As a side note: the tool we’ve built to handle these automated transitions uses the Serverless framework, via AWS API Gateway and Lambda. It was an opportunity for one of our engineers to do a little R&D on the serverless ecosystem and share it with the team (in a Chapter meeting), so the time spent has a clear ROI.
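To give a flavor of what such glue code can look like, here is a minimal sketch of the GitHub-to-JIRA direction. It assumes a branch-naming convention that embeds the JIRA issue key and a hypothetical mapping from PR labels to JIRA transition names; neither is our actual configuration, and the real bot would follow this with a call to the JIRA REST API:

```python
import re

# Hypothetical webhook glue: given a (simplified) GitHub "labeled" event,
# find the JIRA issue key in the branch name and decide which JIRA
# transition to trigger. Branch convention, label names, and transition
# names are all assumptions for the sake of the example.

ISSUE_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

LABEL_TO_TRANSITION = {
    "to review": "In Review",
    "review ok": "Ready for QA",
    "merged":    "Done",
}

def handle_label_event(event):
    """Return (issue_key, transition_name) for a labeled-PR event,
    or None if the event does not require a JIRA transition."""
    branch = event["pull_request"]["head"]["ref"]
    match = ISSUE_KEY.search(branch)
    if not match:
        return None  # branch does not reference a JIRA issue
    transition = LABEL_TO_TRANSITION.get(event["label"]["name"])
    if transition is None:
        return None  # label has no JIRA side effect
    return match.group(1), transition
```

Deployed behind API Gateway and Lambda, a handler like this runs only when a webhook fires, which is why a serverless setup fits this kind of automation well.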

Linters and a styleguide tested on CI

At Botify we try as much as possible to reduce noise in PR reviews, so we can make them more efficient. Part of that is settling as many discussions as possible through tests and CI, so that in our reviews we can focus on the product and how it works, rather than just the code and how it looks. As a team, we agreed upon a styleguide, created rulesets on our linters to automatically lint the code according to that styleguide, and we run a lint check on CI alongside the tests. A PR that fails to follow the Botify styleguide therefore does not pass tests, and style discussions are entirely skipped. Engineers waste less time reviewing code style, and spend more time reviewing code function.
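The principle can be illustrated with a toy check: encode a styleguide rule as code that CI runs, so the review never has to argue about it. The 99-character limit below is an arbitrary example, not our actual ruleset (in practice we use off-the-shelf linters with a shared configuration):

```python
# Toy illustration of "styleguide as CI check". The rule and limit are
# arbitrary examples; real setups use a configured linter for this.

def check_line_length(source, limit=99):
    """Return (line_number, line) pairs that exceed the length limit."""
    return [(i, line)
            for i, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]

def lint(files):
    """CI-style gate over {path: source} mappings:
    return 0 (pass) if every file is clean, 1 (fail) otherwise."""
    for path, text in files.items():
        if check_line_length(text):
            return 1  # any violation fails the build
    return 0
```

Because the check returns a pass/fail exit status, wiring it into CI turns every style rule into a red or green build instead of a review thread.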

Regular discussions about the stack and how we do things

Once we’ve set up a styleguide and linter checks, we make sure to have regular discussions about it. The languages we use are evolving, and we are learning from our experience, therefore it’s important to have a regular discussion about what needs changing in the styleguide. More than just linting, it’s paramount to have regular discussions about “How we do things” as a team.

Here at Botify, we organize what we call “Chapter Meetings” every month, where developers working on a shared codebase get together to have a discussion. We share experiences from the past month (mistakes, discoveries, research, conclusions) and discuss them to ensure that every engineer on the team is aligned with the common mindset. This gives us fewer surprises when reviewing, as engineers are already familiar with the approach used in the PR.

Peer Reviews when necessary

Sometimes preparation, automation, and goodwill are not enough. Some PRs are just too big, too architectural, or too complex to be reviewed alone. In those cases we hold side-by-side peer reviews with the developer who made the PR. We try to make each PR individually readable as much as possible, but know to do peer reviews when necessary.

For big architectural changes that concern everyone, we organize ‘Review Parties’ where we get the whole team together in a meeting room, and the developer who made the PR presents their code to the team in a live-review on a big screen.

It’s important to have strong processes in place for the 99% of PRs that fit them, and the agility to bypass those processes when the need arises for the remaining 1%.

Final Thoughts

All in all, we believe that most of what is required for good reviews has been covered above. Once you’ve established a shared mindset and provided the tools and structure described here, reviews become more focused, less contentious, and more pertinent.

And the results are staggering! Based on statistics over the last 30 days:

  • 84% of PRs get reviewed within 24hrs
  • 46% of PRs get KOed — these are bugs or headaches we caught before production!
  • 29% of PRs get reviewed by more than one developer
  • 11% of PRs need several passes for reviews (multiple KOs)
  • 31% of PRs are presented as to be reviewed commit-by-commit

How are you running your code reviews? Our method works for us and our team, but it may not apply to everyone. What’s different with your team? Feel free to chime in in the comments!

Interested in joining us? We’re hiring! Don’t hesitate to shoot me an email if there are no open positions that match your skills; we are always on the lookout for passionate people.
