Code reviews at Apolitical

Paddy Alton
Apolitical Engineering
5 min read · Dec 9, 2022

At Apolitical we recently spent some time trying to codify ‘the way we do things’. Out of those discussions have come various sets of principles and best practices.

Introduction

One area that sparked a deep and thoughtful discussion was code review — a crucial part of any engineering team’s workflow.

[Figure: schematic view of the stages in our develop-and-deploy process]

We were motivated by questions such as:

  • how can we get the most value out of code reviews?
  • how can we avoid code reviews being a simple ‘rubber stamp’?
  • how can we make sure code reviews don’t slow us down?
  • how can we improve the developer experience of code reviews?

Following our discussion, we outlined these principles for code authors and reviewers.

Our principles

Reviewing code trumps writing code…

Writing new code is the easy part. Delivering it to production is harder. Until it’s delivered, it’s not doing anyone any good.

The ‘last mile’ of delivery is often the most painful. We can deliver more value to the organisation by prioritising review of work that’s been done over starting something new.

Receiving a review when what you wrote is still fresh in your mind also makes it easier to action the comments!

So: we should spend a lot of time reviewing code! Code review is a great thing to do following a meeting/lunch, or at the start or end of the day.

This will only work if we recognise good, prompt code reviews as critical contributions to the team.

What doesn’t get measured doesn’t exist: track how long people wait for review, and agree as a team what a good turnaround looks like.

For example, we could aim to ensure no-one waits more than 24 hours for their code to be reviewed (for this purpose we pretend the weekend doesn’t exist, so code submitted at 5pm on Friday should be reviewed by 5pm on Monday).
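The post doesn’t prescribe tooling for this measurement, but here is a rough sketch of how time-to-first-review could be pulled from a hosting platform’s API. It assumes a GitHub-hosted repository; the repository name and token are placeholders, and pagination, error handling, and the ‘pretend the weekend doesn’t exist’ rule are all omitted for brevity.

```python
"""Rough sketch: measure how long pull requests wait for their first review.

Assumptions (not from the post): a GitHub-hosted repository, a GITHUB_TOKEN
environment variable, and only the first page of results.
"""
import os
from datetime import datetime, timedelta

import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"  # placeholder
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}


def first_review_wait(pr):
    """Return the gap between opening a PR and its first review, or None."""
    reviews = requests.get(
        f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews", headers=HEADERS
    ).json()
    if not reviews:
        return None
    opened = datetime.fromisoformat(pr["created_at"].rstrip("Z"))
    first = min(datetime.fromisoformat(r["submitted_at"].rstrip("Z")) for r in reviews)
    return first - opened


prs = requests.get(
    f"{API}/repos/{REPO}/pulls", params={"state": "all"}, headers=HEADERS
).json()
waits = [wait for pr in prs if (wait := first_review_wait(pr)) is not None]
slow = [wait for wait in waits if wait > timedelta(hours=24)]
print(f"{len(slow)} of {len(waits)} reviewed PRs waited more than 24 hours")
```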

…but reviewing code is writing code

Reviewers should always go into review with the objective of improving what’s been submitted.

Reviewers should see themselves as collaborating on writing the code, not just rubber stamping it as ‘good enough’.

For juniors, improving a more senior person’s code may be too lofty an aspiration — but it’s still a good learning opportunity.

Comments should indicate severity…

The comments you leave in code review serve a number of purposes:

  1. checking that the code works
  2. checking that the code is efficient
  3. checking that it is easy to maintain
  4. spotting whether anything is missing
  5. asking a question about the code
  6. sharing knowledge
  7. giving kudos

Some comments may highlight critical problems, others require no action, and most lie somewhere in between.

It is good to triage your comments for the author. In descending order of severity:

  1. blocking (“I’m not going to approve this until you change it”)
  2. recommended (“if it’s too much right now, at least write a ticket to do it later”)
  3. not blocking (“this is more of a suggestion”) / question / did you know? / high five!

Seniors reviewing juniors should be especially mindful of this.

There is inherent, unspoken pressure to spend time making changes when someone with more information/experience suggests them. Juniors may not yet be equipped to make the judgement themselves.

…but all comments are suggestions

Responsibility for the code lies with the author, not the reviewer.

The reviewer does have a veto — they can withhold approval — but this should be used sparingly.

If there is a genuine dispute we should usually side with the author (because they have more context).

If it can be automated, it should be automated

We can make code reviews more useful by automating as many quality checks as possible. This allows reviewers to focus on adding value to the code.

Examples of checks that can be automated (e.g. via Git hooks; see the sketch after this list) include:

  • formatting/linting
  • static type checking
  • unit tests/test coverage
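The post doesn’t name specific tools, but as a minimal sketch, a Git pre-commit hook (saved as .git/hooks/pre-commit and made executable) could run the checks above. The tools used here (black, ruff, mypy, pytest) are just one example of a Python setup; substitute whatever your project actually uses.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: run automated quality checks before each commit.

The specific tools are assumptions for the sake of the example.
"""
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],  # formatting
    ["ruff", "check", "."],     # linting
    ["mypy", "."],              # static type checking
    ["pytest", "--quiet"],      # unit tests
]

failed = False
for cmd in CHECKS:
    print(f"Running: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        failed = True

if failed:
    print("One or more checks failed; fix them before committing.")
    sys.exit(1)  # a non-zero exit aborts the commit
```

In practice the same checks should also run in CI, since local hooks can be skipped with --no-verify.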

This doesn’t mean reviewers shouldn’t comment on code format etc. if it doesn’t meet our standards, but

  • where possible, built-in ‘suggestions’ should be used (so the author can simply accept the change rather than spend time interpreting the reviewer’s directives)
  • if reviewers are regularly commenting on the same sorts of simple problems, they should surface this and propose a way to automate the check

Spotting whether something is missing (e.g. a version bump, changelog entry, or test case) is hard for reviewers!

We can use checklists for the author (e.g. in issue templates) to try and catch some of these things if we can’t check automatically.
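As an illustration of how such a checklist (version bump, changelog entry, test cases, and so on) can be nudged along, a small CI step could fail while the merge request description still contains unchecked boxes. The MR_DESCRIPTION variable below is a placeholder; how the description reaches the job depends on your CI platform.

```python
"""Sketch: fail a CI job while the author's checklist still has unchecked boxes.

Assumption (not from the post): the merge request description is exposed to
the job via an environment variable (MR_DESCRIPTION is a placeholder name).
"""
import os
import re
import sys

description = os.environ.get("MR_DESCRIPTION", "")

# Markdown task-list items look like "- [ ] item" (unchecked) or "- [x] item" (checked).
unchecked = re.findall(r"^[-*] \[ \] (.+)$", description, flags=re.MULTILINE)

if unchecked:
    print("Checklist items still unchecked:")
    for item in unchecked:
        print(f"  - {item}")
    sys.exit(1)  # block the pipeline until the author ticks or removes them

print("All checklist items are ticked.")
```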

Simple sanity checks are essential…

It should be rare for code to reach the QA stage only to be rejected outright; when that happens, it points to a problem with our process.

Every piece of code is bespoke. The write → review → QA → deploy process is not a production line. It is costly for a person to orient themselves when they perform any of these steps.

If we pay that cost, only to reject code for a reason that could have been caught at an earlier step by someone already working with it:

  • that is a poor experience for the person rejecting the code, distracting them unnecessarily and for little return on the investment of their time
  • code may be bounced back to the author where this might otherwise have been avoided entirely

In short, it slows us down.

It is faster overall to spend more time on the earlier stages and to ensure later stages are as streamlined and valuable as possible.

Authors should make sure automated checks pass and follow any relevant checklists, so that reviewers aren’t left catching simple problems.

Reviewers should check that front-ends render properly and that APIs don’t throw trivial errors before approving code.

…and, especially, value your reviewer’s time

This advice sounds obvious, but it’s common to see authors treat their reviewers like personal quality assurance technicians, making little effort to catch their own errors or to use a checklist to get their merge requests ready for review.

Your colleague arrives at work each day with a finite supply of focus. If they allocate some of it to you, that’s time they can’t spend on their own work. It’s only fair that you maximise the value of their time.

Reviews drastically improve when both participants trust each other. Your reviewer puts in more effort when they can count on you to take their feedback seriously. Viewing your reviewer as an obstacle you have to overcome limits the value they offer you.

Authors should read and review the code changes themselves. Don’t just check for mistakes; imagine reading the code for the first time.

Conclusion

We hope that by codifying these principles we can continue to improve the way we do code reviews at Apolitical.

By publishing them, we hope to help other engineering teams improve their working methods.

How do these principles sound to you? Get in touch via the comments!
