A sip of coffee with characterization testing

Pericles Theodorou
Carwow Product, Design & Engineering
3 min read · Sep 22, 2016

Testing is at the core of what we do in software. It serves many purposes: from validating the correctness of our programs to serving as living documentation of the expected behaviour of our systems.

Recently, I found another way to use testing to increase my overall understanding of a piece of code. In Characterization Testing, Michael Feathers illustrates how this testing approach can be used to characterize the actual behaviour of some untested code and consequently protect it against unintended changes.

Let’s jump into a real example I came across a few days ago.

number_to_currency is a Rails helper method that formats a number to the given precision and attaches the currency sign. In this case it should use the euro sign.
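The original snippet isn't embedded here, so below is a minimal sketch of the kind of helper under test. The name formatted_price and the pass-through branch are assumptions; the real code wrapped Rails' number_to_currency, and this sketch only mimics its observable behaviour for the cases discussed in this post.

```ruby
# Hypothetical stand-in for the helper under test (the real code called
# Rails' number_to_currency with euro formatting).
def formatted_price(amount)
  number = Float(amount)                 # accepts numerics and numeric strings
  format("%.2f €", number).tr(".", ",")  # German style: "5,00 €"
rescue ArgumentError, TypeError
  "#{amount} €"                          # unparsable input is passed through
end
```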

Now that we have the above information, the code looks very simple, right? We pass a number to the method and get back a currency-formatted version. So before we start adding new code, let's add a couple of tests first.

The first tests that come to mind are designed to validate the correctness of the program based on how we expect it to behave given our current understanding. All 3 tests pass at this point.
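A reconstruction of those first tests might look roughly like this. The original gist isn't preserved; formatted_price is an assumed stand-in for the helper, defined inline so the snippet runs on its own.

```ruby
# Assumed stand-in for the helper under test.
def formatted_price(amount)
  format("%.2f €", Float(amount)).tr(".", ",")
rescue ArgumentError, TypeError
  "#{amount} €"
end

# The three tests that first come to mind, as bare assertions:
raise "whole numbers" unless formatted_price(5)   == "5,00 €"
raise "decimals"      unless formatted_price(5.5) == "5,50 €"
raise "zero"          unless formatted_price(0)   == "0,00 €"
puts "all 3 tests pass"
```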

I wonder what happens if we pass a string that can be converted to a numeric object.
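Hypothetically, such a test could look like this (the helper stand-in is repeated so the snippet runs standalone):

```ruby
# Assumed stand-in for the helper under test.
def formatted_price(amount)
  format("%.2f €", Float(amount)).tr(".", ",")
rescue ArgumentError, TypeError
  "#{amount} €"
end

# A numeric string should be converted before formatting.
raise unless formatted_price("5") == "5,00 €"
```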

Excellent, the above test passes as well. Any Rubyist would expect that, under the hood, Rails calls one of the built-in conversion protocols such as to_i or to_f, so it's no surprise.

Testing whether it converts strings to numerics gave me another idea. What if the string cannot be converted? Reading the code again, it is not obvious what the author originally intended. Maybe the Rails helper method can give us a hint, but that case is not covered in the tests.

At this point we are entering the characterization testing cycle. I have exhausted all possible expectations of how I think the program should work and now I’m describing how the program actually works.
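A sketch of that step follows (helper stand-in repeated so it runs on its own). The expectation is deliberately absurd: any value will do, because the point is to make the test fail and let the failure message reveal the real behaviour.

```ruby
# Assumed stand-in for the helper under test.
def formatted_price(amount)
  format("%.2f €", Float(amount)).tr(".", ",")
rescue ArgumentError, TypeError
  "#{amount} €"
end

# Deliberately absurd expectation — we have no idea what the right answer is.
actual = formatted_price("one thousand")
puts actual == "something absurd"  # the "test" fails
```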

As we can see, the test's expectation is a bit absurd, as it should be, because we have no idea how the program should behave. Running the test suite, we get the following failing test:
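The failure reveals the actual behaviour: the unparsable string comes straight back with the euro sign appended. The characterization step is then to pin that behaviour down (a sketch with an assumed helper name, repeated so it runs standalone):

```ruby
# Assumed stand-in for the helper under test.
def formatted_price(amount)
  format("%.2f €", Float(amount)).tr(".", ",")
rescue ArgumentError, TypeError
  "#{amount} €"
end

# Characterization test: record what the code actually does today,
# not what we wish it did.
raise unless formatted_price("one thousand") == "one thousand €"
```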

To be fair, I secretly expected it to return 0.0 or raise an error but both are just guesses. To paraphrase Feathers, does the above show a bug in the code? Not really. We literally have no idea.

We have now arrived at the most important aspect of characterization testing: questions! Questioning an undocumented behaviour of the program will help us learn more about our codebase and the business rules behind it.

From asking questions about whether the above is a bug or not, I learnt two important things about German price formatting:

  • The euro sign comes at the end, with a space between the number and the sign. The equivalent of $5 or £5 is 5 €.
  • Instead of 3%, the valid form is 3 %: there is a space between the number and the percent sign.
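In a Rails app, rules like these normally live in the locale file rather than in the helper call. The sketch below encodes the two bullets above using Rails' i18n number-format conventions; the exact values are an illustration, not the project's actual locale file.

```yaml
# config/locales/de.yml — sketch of the relevant entries
de:
  number:
    currency:
      format:
        unit: "€"
        format: "%n %u"   # sign trails the number, with a space: "5 €"
        separator: ","
        delimiter: "."
    percentage:
      format:
        format: "%n %"    # space before the percent sign: "3 %"
```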

So the expectation one thousand € does not sound that far-fetched after all. However, it is not a valid case, so we have since fixed the code.
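The actual patch isn't shown in the post, but one plausible shape of the fix is to fail loudly on input that cannot be parsed, instead of passing it through (formatted_price remains an assumed name):

```ruby
# Sketch of a possible fix: reject unparsable input.
def formatted_price(amount)
  number = Float(amount)                 # raises ArgumentError for "one thousand"
  format("%.2f €", number).tr(".", ",")
end

begin
  formatted_price("one thousand")
rescue ArgumentError => e
  puts "rejected: #{e.message}"
end
```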

As Feathers says:

If you haven’t determined that the behavior you’ve uncovered is a bug, it’s often a good idea to leave the test in place.

This makes a lot of sense: if users, or even other programmers, have become dependent on an edge case you hadn’t thought of, “fixing” the code breaks something that, in the eyes of the user, was correct.

Interested in making an Impact? Join the carwow-team!
Feeling social? Connect with us on Twitter and LinkedIn :-)
