More unittest Features

Josiah Ulfers
New Light Technologies, Inc. (NLT)
3 min read · Jan 27, 2023

Last time, we moved the tests for our test-driven factorial function from basic assert statements to being run by Python’s unittest module:

import unittest
from factorial import factorial

class TestFactorial(unittest.TestCase):
    def test_base_case(self):
        self.assertEqual(factorial(0), 1)

    def test_first_recursive_case(self):
        self.assertEqual(factorial(2), 2)

    def test_recursing_further(self):
        self.assertEqual(factorial(5), 120)

unittest.main()
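
For reference, these tests assume a factorial module sitting next to the test file. The exact implementation from last time isn't reproduced here, but a minimal sketch that satisfies these tests (and raises ValueError for negative input, which we test below) could look like:

# factorial.py: a minimal sketch, not necessarily the version from the previous post
def factorial(n):
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    if n == 0:
        return 1
    return n * factorial(n - 1)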

We saw that writing our tests this way helped us see precisely where we made mistakes when we introduced regressions.

Today, two more essential test runner features: the ability to target one or a few tests out of a large test suite and helpers for reliably testing errors.

Targeting Tests

If one out of many tests fails, not only does unittest show us exactly which test failed, it lets us re-run the failed test alone. Thus, we can check a fix in far less time than it takes to run all the tests. This doesn’t matter when all our tests run in a thousandth of a second, but on the larger, slower test suites needed for real applications, we can save a good deal of time by running only the tests relevant to the part of the code we’re editing.

When run this way, unittest imports our test file as a module, so we guard the call to unittest.main() to keep it from running at import time:

class TestFactorial(unittest.TestCase):
    ...

if __name__ == '__main__':
    unittest.main()

Now we can run it either as a script, as we’ve been doing, or by invoking the unittest module. Invoking the unittest module lets us run a particular test module, test class or an individual test:

$ python -m unittest test_factorial.TestFactorial.test_base_case
.
--------------------------------------------------------------------
Ran 1 test in 0.000s
OK
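
The same dotted-path syntax works at coarser granularity, too. If we want every test in the class or the whole file, we can name just the class or just the module (assuming the file is still called test_factorial.py):

$ python -m unittest test_factorial.TestFactorial
$ python -m unittest test_factorial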

As our program grows, we gain many test files, and when we want to run them all, unittest can discover them for us:

$ python -m unittest
...
--------------------------------------------------------------------
Ran 3 tests in 0.001s
OK
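
By default, discovery starts from the current directory and picks up files whose names match test*.py. If we later collect our tests into a directory of their own, we can point discovery at it explicitly; the tests/ directory and pattern here are just an example:

$ python -m unittest discover -s tests -p "test_*.py"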

Testing For Errors

Testing libraries such as unittest also give us convenient ways to test for errors.

Recall how we previously tested that our factorial function throws an error when given a negative number. It was a fragile way to write a test because, for example, we could forget the “assert False” line and accidentally write a test that tests nothing at all:

try:
    factorial(-1)
    # Forgot to assert False here
except ValueError as e:
    pass
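
For comparison, the manual version only works as intended when we remember that extra line; it presumably looked something like this:

try:
    factorial(-1)
    assert False, "expected ValueError for a negative argument"
except ValueError:
    pass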

If we were writing tests first, we ought to have noticed the error when we made it, because the test passed when we expected it to fail. But we can also defend in depth against the tricks our own brains play on us, so we lean on unittest, which gives us a simpler and more fail-safe way to test that our programs throw errors when they should:

class TestFactorial(unittest.TestCase):
    def test_lt_zero_error(self):
        with self.assertRaises(ValueError):
            factorial(-1)
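
assertRaises also hands back the exception object when we bind the context manager, so we can check the error message as well. The wording asserted here is only an assumption about what our factorial's message contains; adjust it to match the real message:

import unittest
from factorial import factorial

class TestFactorial(unittest.TestCase):
    def test_lt_zero_error_message(self):
        # Bind the context manager so we can inspect the raised exception afterwards
        with self.assertRaises(ValueError) as context:
            factorial(-1)
        # Assumes the error message mentions "negative"; adjust to the real text
        self.assertIn("negative", str(context.exception))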

Running Automatically

We’ve now completely rewritten our factorial tests to use the unittest library and separated them from the code that computes the function.

Such separation is essential for larger applications, but does come with a major drawback when compared to our original assert statements: nothing ensures that the tests run. When we stuck assert statements in our application code, anybody importing our factorial module necessarily ran our tests, but when we stuck our tests elsewhere, it became easy to ignore them. In most projects, therefore, we also need a robot to run our tests automatically and make the failures visible: that’s a big part of what “continuous integration” tools are designed to do. But, as ever when we hand our jobs to robots, there are traps. We’ll consider this and other real-world complications next time.
