How much do we test a spike?

Gareth Bragg
Published in Ingeniously Simple
2 min read · Dec 4, 2017

This post is part of a series on Building Features with Spike & Stabilise.

People generally expect only fully tested code to go to production. Spike and stabilise means we defer rigorous testing until later. How do we make sure we don’t ship shit?

Four Eyes Are Better Than Two

Redgate have a long and healthy relationship with peer review. It’s a huge part of our quality practices, and different teams go about it in different ways.

Some teams religiously pair on any coding task, with engineers holding each other to account for what they’re writing.

Others prefer explicit code review, normally via GitHub Pull Requests.

Either approach leads to people asking healthy questions of our work, often catching important bugs early.

Coverage Comes Later

We don’t care about test coverage when spiking; that comes later, when we stabilise. While spiking we only write checks that will make delivery quicker or easier (see the sketch after this list), such as:

  • Guiding implementation of complex behaviour
  • Documenting key assumptions
  • Protecting an important boundary within the system
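To make that concrete, here’s a minimal sketch (in Python, with pytest) of what a spike-level check might look like. Everything in it is illustrative: `parse_run_at` is a hypothetical helper, and the assumption it documents, that an upstream service sends timestamps in UTC, is invented for the example.

```python
from datetime import datetime, timezone

def parse_run_at(raw: str) -> datetime:
    # Hypothetical spike code: just enough to learn from, not hardened.
    return datetime.fromisoformat(raw.replace("Z", "+00:00"))

def test_run_at_is_parsed_as_utc():
    # Documents a key assumption: the upstream service sends timestamps
    # in UTC. If that assumption breaks, this check fails fast and tells
    # us exactly which belief was wrong.
    parsed = parse_run_at("2017-12-04T09:00:00Z")
    assert parsed.tzinfo == timezone.utc

# Deliberately missing during the spike: malformed input, non-UTC
# offsets, naive timestamps. Those edge cases wait until we stabilise.
```

A check like this is cheap to write, fast to run, and easy to delete if the spike teaches us the assumption was wrong; comprehensive coverage waits for stabilisation.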

Crucially, we need the spike to be easy to change based on whatever we learn from it. That means there are plenty of checks we could put in place but choose to leave out for now. These typically include:

  • Comprehensive checking of expected behaviour
  • Checking edge cases
  • Integration with other services/systems

Level Up With Experience

We value inspection of running software to understand the actual experience of using a new feature.

It’s common for teams to swarm together, often with our customer support team, in a group testing session. These sessions yield rapid, first-hand experience of trying to make use of a new product capability.

These are normally intense, hour-long sessions, and they can produce some brutal feedback on how good or bad a new feature is to use. That helps us find any rough edges that need rounding off before we’re ready for real user feedback.
