
Testing Plugins

An excerpt from Python Testing with pytest by Brian Okken

The Pragmatic Programmers
5 min read · May 22, 2023

https://pragprog.com/newsletter/

Plugins are code that needs to be tested just like any other code. However, testing a change to a testing tool is a little tricky. When we developed the plugin code in Writing Your Own Plugins, we tested it manually by using a sample test file, running pytest against it, and looking at the output to make sure it was right. We can do the same thing in an automated way using a plugin called pytester that ships with pytest but is disabled by default.

Our test directory for pytest-nice has two files: conftest.py and test_nice.py. To use pytester, we need to add just one line to conftest.py:

ch5/pytest-nice/tests/conftest.py

"""pytester is needed for testing plugins."""
pytest_plugins = 'pytester'

This turns on the pytester plugin. We will be using a fixture called testdir that becomes available when pytester is enabled.

Often, tests for plugins follow the same form as the manual steps we just described:

  1. Make an example test file.
  2. Run pytest with or without some options in the directory that contains our example file.
  3. Examine the output.
  4. Possibly check the result code — 0 for all passing, 1 for some failing.
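This loop is exactly what pytester automates for us. As a point of reference, the same steps can also be scripted by hand with pytest.main(), which returns the session's exit code (pytest.main() and pytest.ExitCode are public pytest API; the file name test_sample.py is only an illustration):

```python
import tempfile
from pathlib import Path

import pytest

# 1. Make an example test file in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "test_sample.py").write_text(
        "def test_pass():\n"
        "    assert 1 == 1\n"
        "\n"
        "def test_fail():\n"
        "    assert 1 == 2\n"
    )

    # 2. Run pytest against the directory.
    ret = pytest.main([tmp, "-q", "-p", "no:cacheprovider"])

# 4. Check the result code: one failing test gives exit code 1.
assert ret == pytest.ExitCode.TESTS_FAILED
print(int(ret))  # → 1
```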

Let’s look at one example:

ch5/pytest-nice/tests/test_nice.py

def test_pass_fail(testdir):

    # create a temporary pytest test module
    testdir.makepyfile("""
        def test_pass():
            assert 1 == 1

        def test_fail():
            assert 1 == 2
    """)

    # run pytest
    result = testdir.runpytest()

    # fnmatch_lines does an assertion internally
    result.stdout.fnmatch_lines([
        '*.F*',  # . for Pass, F for Fail
    ])

    # make sure that we get a '1' exit code for the test suite
    assert result.ret == 1
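The '*.F*' pattern passed to fnmatch_lines above uses fnmatch-style globs, where '*' matches any run of characters. The stdlib fnmatch module follows the same matching rules (the progress line below is illustrative, not captured pytest output):

```python
from fnmatch import fnmatch

# A line like pytest prints while running: a dot for the passing
# test, an F for the failing one (illustrative, not real output).
progress_line = "test_pass_fail.py .F"

assert fnmatch(progress_line, "*.F*")      # the pass dot followed by an F
assert not fnmatch(progress_line, "*.O*")  # no 'O' without --nice
print("globs matched as expected")
```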

The testdir fixture automatically creates a temporary directory for us to put test files in. It has a method called makepyfile() that lets us fill in the contents of a test file. In this case, we are creating two tests: one that passes and one that fails.

We run pytest against the new test file with testdir.runpytest(). You can pass in options if you want. The return value can then be examined further, and is of type RunResult.

Usually, I look at stdout and ret. To check the output the way we did manually, use fnmatch_lines, passing in a list of strings we want to see in the output; then make sure ret is 0 for passing sessions and 1 for failing sessions. The strings passed to fnmatch_lines can include glob wildcards.

We can use our example file for more tests. Instead of duplicating that code, let's make a fixture:

ch5/pytest-nice/tests/test_nice.py

@pytest.fixture()
def sample_test(testdir):
    testdir.makepyfile("""
        def test_pass():
            assert 1 == 1

        def test_fail():
            assert 1 == 2
    """)
    return testdir

Now, for the rest of the tests, we can use sample_test as a directory that already contains our sample test file. Here are the tests for the other option variants:

ch5/pytest-nice/tests/test_nice.py

def test_with_nice(sample_test):
    result = sample_test.runpytest('--nice')
    result.stdout.fnmatch_lines(['*.O*', ])  # . for Pass, O for Fail
    assert result.ret == 1


def test_with_nice_verbose(sample_test):
    result = sample_test.runpytest('-v', '--nice')
    result.stdout.fnmatch_lines([
        '*::test_fail OPPORTUNITY for improvement*',
    ])
    assert result.ret == 1


def test_not_nice_verbose(sample_test):
    result = sample_test.runpytest('-v')
    result.stdout.fnmatch_lines(['*::test_fail FAILED*'])
    assert result.ret == 1

Just a couple more tests to write. Let’s make sure our thank-you message is in the header:

ch5/pytest-nice/tests/test_nice.py

def test_header(sample_test):
    result = sample_test.runpytest('--nice')
    result.stdout.fnmatch_lines(['Thanks for running the tests.'])


def test_header_not_nice(sample_test):
    result = sample_test.runpytest()
    thanks_message = 'Thanks for running the tests.'
    assert thanks_message not in result.stdout.str()

This could have been part of the other tests also, but I like to have it in a separate test so that one test checks one thing.

Finally, let’s check the help text:

ch5/pytest-nice/tests/test_nice.py

def test_help_message(testdir):
    result = testdir.runpytest('--help')

    # fnmatch_lines does an assertion internally
    result.stdout.fnmatch_lines([
        'nice:',
        '*--nice*nice: turn FAILED into OPPORTUNITY for improvement',
    ])

I think that’s a pretty good check to make sure our plugin works.

To run the tests, let's start in our pytest-nice directory and make sure our plugin is installed. We do this either by installing the .tar.gz file or installing the current directory in editable mode:

$ cd /path/to/code/ch5/pytest-nice/
$ pip install .
Processing /path/to/code/ch5/pytest-nice
...
Running setup.py bdist_wheel for pytest-nice ... done
...
Successfully built pytest-nice
Installing collected packages: pytest-nice
Successfully installed pytest-nice-0.1.0

Now that it’s installed, let’s run the tests:

$ pytest -v
=================== test session starts ===================
plugins: nice-0.1.0, cov-2.5.1
collected 7 items

test_nice.py::test_pass_fail PASSED [ 14%]
test_nice.py::test_with_nice PASSED [ 28%]
test_nice.py::test_with_nice_verbose PASSED [ 42%]
test_nice.py::test_not_nice_verbose PASSED [ 57%]
test_nice.py::test_header PASSED [ 71%]
test_nice.py::test_header_not_nice PASSED [ 85%]
test_nice.py::test_help_message PASSED [100%]

================ 7 passed in 0.57 seconds =================

Yay! All the tests pass. We can uninstall it just like any other Python package or pytest plugin:

$ pip uninstall pytest-nice
Uninstalling pytest-nice-0.1.0:
...
Proceed (y/n)? y
Successfully uninstalled pytest-nice-0.1.0

A great way to learn more about plugin testing is to look at the tests contained in other pytest plugins available through PyPI.

We hope you enjoyed this excerpt from Python Testing with pytest by Brian Okken. You can continue reading here on Medium, or purchase a copy directly from The Pragmatic Programmers.
