Let’s face it, tests can be boring to write. You write some awesome code, then you run a command or you click around in a browser, and your program works! But ugh, now you have to write tests. What a drag. Everyone’s been there, and most of us power through that pain because of a shared certainty that tests are important. We even adopt strategies like test-driven development to ease the pain and ensure we actually write our very-important-tests.
But why are tests important?
Because your boss says they are? Because developers at Google and Facebook and Amazon write tests? Because that’s just what we do? Not good enough reasons, I do declare.
Tests are primarily important for an obvious reason: they verify that our code works as intended. But even that isn’t a good enough reason. We can manually run a command or click in a browser to verify that our code works! If we write a piece of software that never has to be changed, there’s absolutely no reason to write tests, because we can just verify that it works with the best test of all: our own senses.
I’m sure you noticed the key word in that last sentence though: change. Every meaningful piece of software ever written needs to change. Bug fixes, new features, refactors… Which brings us to the real reason we write tests.
Tests are important because they verify that our code works as intended even after we change it. Retesting everything manually just doesn’t scale.
A whole host of you are probably screaming at your screens right now: “Duh!!! Why did it take you like a bajillion paragraphs to get to that point???” Hear me out.
Ask yourself an important question now: “Are my tests verifying that my code works as intended even after I change it?” Your first response might be a confident “yes!”, but here’s why you might be wrong:
- Have you ever refactored code and had tests fail, even though the code is actually working fine?
- Have you ever refactored code and had tests keep passing, even though the code is now broken?
I dare say this has happened to everyone, and if it happens frequently and with large sets of tests, then your tests are not providing any value. They’re dead code you’re maintaining simply because “tests are important”. Tests should fail when your code is broken and pass when it’s working, not the other way around.
Let’s look at this concept in a real-world scenario: Redux “ducks”. A duck is a collection of related action creators, reducers, and selectors, all bundled together in a single module or folder. (You can read more about the ducks concept in the ducks-modular-redux project.)
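To make the idea concrete, here’s a minimal sketch of what a duck might look like. The counter duck and all of its names (`increment`, `reducer`, `selectCount`) are hypothetical, invented for illustration:

```javascript
// A hypothetical "counter" duck: action creator, reducer, and selector
// bundled in one module, per the ducks-modular-redux convention.

// Action type, namespaced to the duck
const INCREMENT = 'counter/INCREMENT';

// Action creator
const increment = (amount = 1) => ({ type: INCREMENT, payload: amount });

// Reducer
const initialState = { count: 0 };
function reducer(state = initialState, action = {}) {
  switch (action.type) {
    case INCREMENT:
      return { ...state, count: state.count + action.payload };
    default:
      return state;
  }
}

// Selector
const selectCount = (state) => state.count;

module.exports = { increment, reducer, selectCount };
```

Everything a consumer needs — dispatching the action and reading the resulting state — lives behind this one module’s public surface, which is exactly what makes the testing question below interesting.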
Hot Take™ #1: If you’re unit testing your action creators, reducers, and selectors in isolation, you’re wasting your ducking time.
Why? Because these kinds of tests will fail when your code is working and pass when your code is broken. They are forced to test implementation details of your Redux state, and they don’t test the action/reducer and reducer/selector contracts that exist in your application. (See examples of this kind of unit test here.)
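As an illustration of the problem, here’s a sketch of isolation-style tests for a hypothetical counter duck (all names invented for this example). Each assertion pins an implementation detail rather than observable behavior:

```javascript
// Hypothetical counter duck under test (illustrative names only).
const INCREMENT = 'counter/INCREMENT';
const increment = (amount = 1) => ({ type: INCREMENT, payload: amount });
const reducer = (state = { count: 0 }, action = {}) =>
  action.type === INCREMENT
    ? { count: state.count + action.payload }
    : state;

// Isolation-style "unit" tests: each one couples to internals.
// Rename the action type, change the payload key, or restructure
// the state, and these fail even if the duck still behaves correctly.
const action = increment(2);
console.assert(action.type === 'counter/INCREMENT'); // pins the type string
console.assert(action.payload === 2);                // pins the payload shape
const next = reducer({ count: 0 }, action);
console.assert(next.count === 2);                    // pins the state shape
```

Note that none of these tests would catch a broken wiring between the action creator and the reducer — say, a typo in the type string used in only one of the two — because each piece is tested against its own internals rather than against the other.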
Hot Take™ #2: Only write tests that test the entire “duck” by dispatching an action to a real Redux store and making assertions on values returned by selectors.
Why? Because these kinds of tests are much more resilient to refactoring, and they test the action/reducer and reducer/selector integration points of your duck. (See examples of this kind of duck test here.)
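Here’s a sketch of that style, again with a hypothetical counter duck. A minimal stand-in for Redux’s `createStore` is included so the example is self-contained; in a real project you’d use the one from the `redux` package:

```javascript
// Minimal stand-in for Redux's createStore, so this sketch runs
// without the redux package. Duck names are hypothetical.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); return action; },
  };
}

// The hypothetical counter duck.
const INCREMENT = 'counter/INCREMENT';
const increment = (amount = 1) => ({ type: INCREMENT, payload: amount });
const reducer = (state = { count: 0 }, action = {}) =>
  action.type === INCREMENT
    ? { count: state.count + action.payload }
    : state;
const selectCount = (state) => state.count;

// Duck-style test: dispatch real actions to a real store, then assert
// only on selector output. Nothing here touches action shapes or the
// internal state structure, so refactoring internals won't break it.
const store = createStore(reducer);
store.dispatch(increment());
store.dispatch(increment(4));
console.assert(selectCount(store.getState()) === 5);
```

Because the test only goes in through the action creator and out through the selector, you can rename the type string, reshape the state, or split the reducer, and the test keeps passing as long as the duck’s observable behavior is unchanged — and it fails if that behavior actually breaks.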
Still not convinced? Check out Top Hat’s Don’t Waste Your Ducking Time example project. It demonstrates — in detail, and with just the right amount of clickbait — how to apply the “duck testing” philosophy to not only action creators, reducers, and selectors, but also sagas.
Duck testing is to Redux ducks as Dawn dish soap is to real ducks. So go off into the world and write better Redux tests! Your brain, hands, and all aquatic fowl will thank you for your more maintainable, readable, and — most importantly — more changeable codebases.