Refactoring Chapter 4 — Building Tests

Rafael Melo
4 min read · Jan 6, 2020

--

Writing good tests increases my effectiveness as a programmer. This was a surprise for me and is counterintuitive for most programmers — so it’s worth explaining why.

Every programmer can tell a story of a bug that took a whole day (or more) to find. Fixing the bug is usually pretty quick, but finding it is a nightmare. And then, when you do fix a bug, there’s always a chance that another one will appear and that you might not even notice it till much later. And you’ll spend ages finding that bug.

Once all my tests are fully automatic and check their own results, running them is easy — as easy as compiling. So I started to run the tests every time I compiled.

If I added a bug that was caught by a previous test, it would show up as soon as I ran that test. The test had worked before, so I would know that the bug was in the work I had done since I last tested. And I ran the tests frequently — which means only a few minutes had elapsed. I thus knew that the source of the bug was the code I had just written. As it was a small amount of code that was still fresh in my mind, the bug was easy to find. Bugs that would have otherwise taken an hour or more to find now took a couple of minutes at most.

I had a powerful bug detector.

As I noticed this, I became more aggressive about writing the tests. Instead of waiting for the end of an increment, I would add the tests immediately after writing a bit of functionality. Every day I would add a couple of new features, along with the tests for them.

There is always a danger that by trying to write too many tests you become discouraged and end up not writing any. You should concentrate on where the risk is. Look at the code and see where it becomes complex. Look at a function and consider the likely areas of error. Your tests will not find every bug, but as you refactor, you will understand the program better and thus find more bugs. Although I always start refactoring with a test suite, I invariably add to it as I go along.

Tips on Writing Tests

  • When I write a test against existing code I like to see every test fail at least once when I write it. My favorite way of doing that is to temporarily inject a fault into the code.
  • “Never refactor on a red bar,” meaning you shouldn’t be refactoring if your test suite has a failing test. “Revert to green,” meaning you should undo recent changes and go back to the last state where the whole test suite passed (usually by going back to a recent version-control checkpoint).
  • Look at all the things the class should do and test each one of them for any conditions that might cause the class to fail. This is not the same as testing every public method, or every function. Testing should be risk-driven; remember, I’m trying to find bugs, now or in the future. Therefore I don’t test accessors that just read and write a field: They are so simple that I’m not likely to find a bug there. My focus is to test the areas that I’m most worried about going wrong. That way I get the most benefit for my testing effort. It is better to write and run incomplete tests than not to run complete tests.
  • If there is some duplication between tests, you might be tempted to remove it as you would in regular code. For example:

Raising the constant to the outer scope.

Never do this. It will work for the moment, but it introduces one of the worst bugs in testing — a shared fixture that lets tests interact. The const keyword in JavaScript only means the reference to asia is constant, not the content of the object it points to. Should a future test change that common object, I’ll end up with intermittent test failures, yielding different results depending on the order in which the tests are run.

A better option is to use something like this:

The beforeEach clause is run before each test runs, clearing out asia and setting it to a fresh value each time. This way I build a fresh variable before each test is run, which keeps the tests isolated.

Given that the setup code in beforeEach runs with every test, why not just leave it inside the individual “it” blocks? The presence of the beforeEach block signals to the reader that I’m using a standard fixture. You can then look at all the tests within the scope of that describe block and know they all take the same base data as a starting point.

Much More Than This

The best measure for a good enough test suite is subjective: How confident are you that if someone introduces a defect into the code, some test will fail? This isn’t something that can be objectively analyzed, and it doesn’t account for false confidence, but the aim of self-testing code is to get that confidence. If I can refactor my code and be pretty sure that I’ve not introduced a bug because my tests come back green — then I can be happy that I have good enough tests.

Also, remember that it is possible to write too many tests. One sign of that is when I spend more time changing the tests than the code under test — and I feel the tests are slowing me down. But while over-testing does happen, it’s vanishingly rare compared to under-testing.
