How To Write Tests

“How do I write unit tests, and where do I even start?”, “How do I make sure I don’t miss anything?”, “I saw some examples of test suites in your other code bases, but I’m not sure how to set up my own tests, how do I do it?” — those are familiar questions, aren’t they? You either had them yourself or heard them (or both). I was definitely asking these questions myself at a certain point, so I spent a bit of time studying software testing. In this post, I’ll try to summarize everything I have learned about testing so far — from books, testing courses, experience, and my colleagues.

Testing, just like coding, starts with the requirements (or specification). What is this thing supposed to do? How is it supposed to do it (the how here doesn’t refer to the implementation details, but rather to the user experience or to the way parts of the system work together)? Those need to be written down (nothing fancy, bullet points in a note-taking app will do — I actually use paper journals) and consulted during development and while writing tests. Decision tables are an enormous help: they let you document what needs to happen under which conditions (in many cases it’s very difficult to hold multiple conditions in your head, and very easy to miss something).
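
To make that concrete, here is what a tiny, made-up decision table might look like if you jot it down as data. The rule, the condition names, and the actions are all hypothetical; the point is only that every combination of conditions gets an explicit outcome.

```ts
// A hypothetical decision table for a discount rule, written down as data.
// Each row lists a combination of conditions and the action it should produce.
const discountDecisionTable = [
  { isMember: true,  bigOrder: true,  action: "apply 10% discount" },
  { isMember: true,  bigOrder: false, action: "apply 5% discount" },
  { isMember: false, bigOrder: true,  action: "apply 5% discount" },
  { isMember: false, bigOrder: false, action: "no discount" },
];
```

Each row then becomes at least one test case once you get to writing tests.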

Test-driven development has been very popular for quite a while now. I personally didn’t find it terribly helpful when dealing with unfamiliar things — whether you’re new on a project, or you’re new to the technology, or you don’t really know the domain all that well, etc. I was once supposed to implement an API endpoint for uploading files to AWS S3 using a multipart form POST request in the Dropwizard framework — I knew nothing about any of the three, so there was no point for me in starting with the tests. In order to write a test, you need to know how the thing works in the first place. So instead of writing unit/integration tests, I was performing functional white-box testing while developing — something I still find most helpful when dealing with the unfamiliar. Writing actual unit tests once the work is finished is then easy: you just code the test cases you used for the functional testing. I guess you could call it “test-driven development” if you squint hard enough.

A side note related to the speed of your work, and something I learned from experience: don’t ever hesitate to spend some time planning your work, writing down requirements and putting together your decision tables when you’re about to embark on anything more than a tiny project, or when you deal with the unfamiliar or the unknown. In programming, things tend to take longer than expected. Armed with the plan and the documentation, you’ll be able to finish your work faster than you would otherwise, or return to the work quicker after having been pulled onto another project; writing tests will also be much easier.

Back to writing tests: it’s a good idea to start with positive test cases (meaning, test cases that assert that your thing does what it’s supposed to do if the inputs are sensical — i.e. what you would expect them to be). Then you need some realistic negative test cases (meaning, test cases asserting on how your thing is supposed to behave if the inputs are nonsensical — which they are surprisingly often, especially if the source of the input is a human). These are harder to come up with (especially in the case of human input — programs’ behavior is easier to predict), but you can reflect on your experiences as a user: what error messages have you gotten from a similar thing? Why did you get them? What went wrong? What can easily go wrong? What if the person has malicious intent?
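
As a quick sketch of the two kinds, assuming a Jest-style test runner (with global describe/it/expect) and a hypothetical parseQuantity function that turns user input into a positive integer or throws:

```ts
// parseQuantity is a hypothetical function: it turns a user's input string
// into a positive integer quantity, or throws on nonsensical input.
import { parseQuantity } from "./parseQuantity";

describe("parseQuantity", () => {
  // Positive test case: sensical input, expected result.
  it("parses a plain integer typed by the user", () => {
    expect(parseQuantity("3")).toBe(3);
  });

  // Negative test cases: input a human can (and will) easily produce.
  it("rejects non-numeric input", () => {
    expect(() => parseQuantity("three")).toThrow();
  });

  it("rejects negative quantities", () => {
    expect(() => parseQuantity("-1")).toThrow();
  });
});
```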

Which brings me to a very important concept in software testing called equivalence partitioning. Equivalence partitioning means dividing the set of possible inputs into subsets of inputs that would produce similar results and are, therefore, equivalent to each other in that sense. You can immediately see why we need that technique: there’s an unbounded set of possible inputs, but you don’t need to write an unbounded set of test cases. All you need is a single test case for each partition, and two test cases for each boundary between the partitions (the so-called “corner cases” or “edge cases”). If you’re in a hurry, you can probably just write n boundary test cases (where n is the number of partitions — I mean, all the elements of a partition are equivalent to each other, aren’t they?) and call it a day.

To give you a simple example of equivalence partitioning: imagine, if you will, a form field for entering a month. Requirements: format — two digits, padded with a zero, no default value, incorrect input should not be allowed. What are the partitions in this case? Three: the set of integers (−∞; 0], the set of integers [1; 12], and the set of integers [13; +∞) (it might also make sense to consider a fourth partition of non-numerical input). There are two boundaries. Inputs around the first boundary would be 0 and 1; inputs around the upper boundary would be 12 and 13. You need to write test cases for 0, 13, and either 1 or 12. Testing for padding with a zero: two partitions — single-digit months and the rest of the months in [1; 12]. One test case for each (probably 9 and 10). Finally, some sort of assertion on the absence of a default value somewhere among those test cases. Picking test inputs right at and around the partition boundaries like this is called boundary value analysis.
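
Here is what those cases might look like as tests, assuming a Jest-style runner and a hypothetical validateMonth function that returns the normalized two-digit month for valid input and null otherwise:

```ts
// validateMonth is a hypothetical helper: "9" -> "09", "12" -> "12",
// anything outside [1; 12] (or non-numeric) -> null.
import { validateMonth } from "./validateMonth";

describe("month field validation", () => {
  // Boundary between (−∞; 0] and [1; 12]
  it("rejects 0", () => expect(validateMonth("0")).toBeNull());
  it("accepts 1 and pads it to 01", () => expect(validateMonth("1")).toBe("01"));

  // Boundary between [1; 12] and [13; +∞)
  it("accepts 12", () => expect(validateMonth("12")).toBe("12"));
  it("rejects 13", () => expect(validateMonth("13")).toBeNull());

  // Padding partitions: single-digit months vs the rest of [1; 12]
  it("pads 9 to 09", () => expect(validateMonth("9")).toBe("09"));
  it("keeps 10 as 10", () => expect(validateMonth("10")).toBe("10"));

  // The optional fourth partition: non-numeric input
  it("rejects non-numeric input", () => expect(validateMonth("ab")).toBeNull());
});
```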

You probably noticed that tests can be a form of documentation of what the code is supposed to be doing. (Actually, I think that’s the primary function of tests. I like the way Rich Hickey looks at tests when talking about “guardrail programming” — watch until 17:13. For the same reason I say: when writing tests, don’t be afraid to miss something… because you will.) Tests are often the more up-to-date form of documentation, because for every change in requirements there will be a change in tests, but not necessarily in the documentation (especially when done in a hurry). The documentation is often simply not good enough (ambiguous, difficult to read and/or understand, difficult to find, incomplete, completely absent, etc.), so other programmers will often rely, among other things, on your tests to understand your code. Which means you need to write tests for people to read.

What does that mean in practice? For example, if you’re writing tests for a front-end component, don’t write assertions like “Renders correctly”. What does “correctly” mean? Assertions like that don’t bring any light into the world. Be specific: “Renders with no default value”, etc. At the same time, you want broad assertions: it’s often a good idea to assert on the whole thing rather than one of its parts. For example, if the function returns an object — assert on the whole object, don’t pick its properties one by one.
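
For instance, assuming a Jest-style runner and a hypothetical buildMonthField helper that returns the field’s initial state, a broad but specific assertion could look like this:

```ts
// buildMonthField is a hypothetical helper returning the initial state of the
// month field from the earlier example.
import { buildMonthField } from "./monthField";

it("renders with no default value and no error", () => {
  // Assert on the whole object at once rather than property by property;
  // an unexpected extra or missing property will then also fail the test.
  expect(buildMonthField()).toEqual({
    value: "",
    error: null,
  });
});
```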

Speaking of assertions, another important point is: assertions must be made in the language of the domain, not in the language of… oh, I don’t know, arrays having a certain number of elements, for example. It should be easy to understand why the array must have a certain number of elements, and what the business need behind that requirement was (or what bug was fixed). Who knows, maybe you’re working on a feature and struggling to get that test passing, while the business need behind it isn’t even there anymore.
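
A made-up illustration: the raw check is still about an array’s length, but the test spells out the business rule behind it (getShippingOptions and the rule itself are hypothetical):

```ts
// getShippingOptions is a hypothetical function used only for this illustration.
import { getShippingOptions } from "./shipping";

it("offers one shipping option per warehouse that has the item in stock", () => {
  const options = getShippingOptions({ itemId: "A-1", warehousesWithStock: 3 });
  expect(options).toHaveLength(3);
});
```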

I’ve heard from a number of developers I admire that it’s best to use the smallest possible test. I personally favor integration tests, where you can test several things at once, and not just separate pieces but the whole flow — but those are usually difficult to set up and change, it may be difficult to locate the exact reason for a test failure, and they can be expensive to run on every commit. I do think that the smallest possible test is a good rule, as long as it exercises the largest possible chunk of a public API.

I like the idea of testing public interfaces only. Implementation details can be left alone (meaning, if your interface function uses some utility function, you don’t need to test the utility function; how the interface function arrives at the result is not important, as long as it conforms to the requirements, i.e. works correctly). Like I said — testing and coding start with the requirements, which come from the domain and are expressed in the high-level language of the domain problem you are supposed to solve. Besides, implementation details can change, but public interfaces are a commitment. Having a bazillion tests in the code base is not always a good thing. It’s better to have fewer tests that cover all the important test cases and requirements.
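
A small sketch of what that means, with a hypothetical formatPrice function as the public interface; whatever rounding or padding utilities it uses internally are deliberately not imported or tested on their own:

```ts
// formatPrice is the hypothetical public interface; the utilities it calls
// internally (rounding, padding, etc.) are not tested here directly.
import { formatPrice } from "./price";

it("formats a price with a currency symbol and two decimal places", () => {
  expect(formatPrice(1234.5, "EUR")).toBe("€1234.50");
});
```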

Finally, segregate your fixtures (fixtures are the dummy inputs and the dummy starting state you use in your tests). That will make the tests easier to change. The functions that set things up for the tests are typically written last: first you write your tests as if you already have everything you need, and then you look at which variables are undefined and figure out how to define them. Don’t forget to tear things down and start every unit test tabula rasa, or always with the same state — otherwise you might end up with really flaky tests.
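
As a minimal sketch, assuming a Jest-style runner and hypothetical makeCart, resetDatabase, and addItem helpers:

```ts
// makeCart and resetDatabase are hypothetical fixture helpers, kept in their
// own module so every test can reuse (and change) them in one place.
import { makeCart, resetDatabase } from "./fixtures";
import { addItem } from "./cart";

describe("cart", () => {
  let cart: ReturnType<typeof makeCart>;

  beforeEach(() => {
    // Every test starts from the same known state...
    cart = makeCart({ items: [] });
  });

  afterEach(async () => {
    // ...and cleans up after itself, so no state leaks into the next test.
    await resetDatabase();
  });

  it("adds an item to an empty cart", () => {
    expect(addItem(cart, { sku: "A-1", qty: 1 }).items).toHaveLength(1);
  });
});
```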

I hope this was helpful. If you are interested in learning more about testing, I recommend researching resources on functional testing — understanding the nuts and bolts of the testing profession really helps here. Besides, developers do an awful lot of manual and automated testing themselves (especially in places that don’t hire actual testers). You can skip the parts on tester-specific documentation (test plans, test design, etc. — although those might come in handy when writing acceptance tests for the system; the ability to write a good bug report is a good skill for a developer to have, too) or the parts you already know. Unfortunately, I cannot recommend anything specific at the moment because the resources I used are in Russian, but as I encounter good things in English, I’ll add them here.