Testing asyncio Python code with pytest

Python is a vital part of the iGenius coding life. Our backend team stars Dario and Matteo share some findings from one of their latest tests.

Since the inclusion of asyncio in the standard library with version 3.4, the Python ecosystem has seen a growing interest in asynchronous programming. The introduction of the new async/await syntax in Python 3.5, which makes working with coroutines more explicit and straightforward, has pushed this trend even further.

An increasing number of libraries and utilities has consequently been built on top of asyncio (e.g. aio-libs), and new asynchronous frameworks that fully leverage the new syntax are starting to emerge and gain traction (take a look at curio and trio if you are curious).

The main reason behind this interest is clear: when dealing with I/O-driven programs, a single-threaded, non-blocking I/O framework is often a safer and better-performing alternative to classical multithreading-based concurrency, with the latter being even more limited in Python due to the famous GIL, at least in the canonical CPython implementation.

With our code running on a single thread, virtually all those nasty bugs related to race conditions, notoriously hard to find and reproduce, are gone. Still, we should not forget that untested code is broken code. So, let’s dive into testing Python asyncio code with pytest!


A simple example

Let’s start with a very simple example:
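Here is a minimal sketch of the coroutine (the say.py file name and the __main__ runner block are our assumptions):

# say.py
import asyncio


async def say(what, when):
    # Sleep for `when` seconds, then hand back the `what` value
    await asyncio.sleep(when)
    return what


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(say('what', 3)))
    loop.close()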

The say coroutine returns the value passed as the what parameter after the number of seconds given by the when parameter. If we execute it as a script, the program outputs what after 3 seconds and then exits. Let’s write a test to assert that say behaves as expected:
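A naive first attempt is to define the test itself as a coroutine (a sketch, assuming the say.py module above):

# test_say.py
from say import say


async def test_say():
    assert 'Hello!' == await say('Hello!', 0)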

$ pytest test_say.py
============ test session starts ============
platform darwin -- Python 3.6.2, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /Users/igenius/pytest-asyncio, inifile:
collected 1 item
test_say.py . [100%]
=========== warnings summary =============
test_say.py::test_say
/Users/igenius/.pyenv/versions/3.6.2/envs/pytest-asyncio/lib/python3.6/site-packages/_pytest/python.py:147: RuntimeWarning: coroutine 'test_say' was never awaited
  testfunction(**testargs)
-- Docs: http://doc.pytest.org/en/latest/warnings.html
============= 1 passed, 1 warnings in 0.01 seconds ============

At first glance, the test seems to have passed. But the suspicious RuntimeWarning at the bottom, about a test_say coroutine that was never awaited, should make us doubt its actual outcome. To check that the test really works, we can first make it fail on purpose and verify that it does fail when executed.
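For instance, we can change the expected value to something say will never return (sketched under the same assumptions):

# test_say.py
from say import say


async def test_say():
    expected = 'This should fail!'
    assert expected == await say('Hello!', 0)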

If we try to execute it, the output is, perhaps unsurprisingly, the same as for the previous test, RuntimeWarning included:

test_say.py . [100%]
========== warnings summary ===========
test_say.py::test_say
/Users/igenius/.pyenv/versions/3.6.2/envs/pytest-asyncio/lib/python3.6/site-packages/_pytest/python.py:147: RuntimeWarning: coroutine 'test_say' was never awaited
  testfunction(**testargs)
-- Docs: http://doc.pytest.org/en/latest/warnings.html
======== 1 passed, 1 warnings in 0.01 seconds =========

The experiment proved that our test is broken. And the reason why it does not work can easily be inferred from our old friend, you have already guessed it, the RuntimeWarning: since the test coroutine is never awaited, its body, assertion included, never actually runs.


Injecting the loop

The problem with our test is that the default test runner in pytest executes every test it collects as a standard Python callable, hence the warning about coroutines never being awaited. So, we need a way to tell pytest to use an event loop to execute tests defined as asyncio coroutines. There are different ways to accomplish this. We could, for instance, instantiate an event loop in our code and inject it into the tests that need to execute a coroutine, using a pytest fixture:
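A sketch of this approach (test_say_loop_fixture.py), still carrying the deliberately wrong expected value from the previous step:

# test_say_loop_fixture.py
import asyncio

import pytest

from say import say


@pytest.fixture
def event_loop():
    # Setup: grab a reference to asyncio's default event loop
    loop = asyncio.get_event_loop()
    yield loop
    # Teardown: close the loop once the test is done
    loop.close()


def test_say(event_loop):
    expected = 'This should fail!'
    # A synchronous test that submits the coroutine to the injected loop
    assert expected == event_loop.run_until_complete(say('Hello!', 0))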

In the code above, the event_loop fixture takes advantage of the setup/teardown capabilities offered by pytest to get a reference to asyncio’s default loop. The loop is injected into the test cases requiring it and finally closed after their execution. You may also notice that test_say is a synchronous function that executes asynchronous code by submitting it to the event loop it receives as input.

This is the output we obtain when we run this test:

...
collected 1 item
test_say_loop_fixture.py F                                                                   [100%]
============= FAILURES =========
_____________ test_say _________
event_loop = <_UnixSelectorEventLoop running=False closed=False debug=False>
    def test_say(event_loop):
        expected = 'This should fail!'
>       assert expected == event_loop.run_until_complete(say('Hello!', 0))
E       AssertionError: assert 'This should fail!' == 'Hello!'
E         - This should fail!
E         + Hello!
test_say_loop_fixture.py:18: AssertionError
---------------------------------------- Captured log setup ----------------------------------------
selector_events.py          65 DEBUG    Using selector: KqueueSelector

The test failed as expected! Just replace the value of the expected result with expected = 'Hello!' to finally make the test pass:

...
collected 1 item
code/test_say_loop_fixture.py .                                                              [100%]
============ 1 passed in 0.01 seconds ===========

The main drawback of the fixture-based approach is that every test that wants to execute a coroutine is required to explicitly submit it to the event loop.

A more flexible way to test asynchronous code is to modify the test runner in order to recognize tests defined as coroutines and execute them as asyncio tasks.

Luckily, there is a pytest plugin, pytest-asyncio, that already does this for us! The plugin offers a pytest.mark.asyncio decorator that tells the test runner to treat the decorated test as a coroutine to be executed by an event loop. So, let’s reconsider our first, naive attempt in the light of this new discovery:
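A sketch of the revised test:

# test_say_pytest_asyncio.py
import pytest

from say import say


@pytest.mark.asyncio
async def test_say():
    # The plugin takes care of running this coroutine on an event loop
    expected = 'Hello!'
    assert expected == await say('Hello!', 0)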

The test code is much cleaner, and this time no runtime warnings pop up when we run it!

$ pytest test_say_pytest_asyncio.py
======================================= test session starts ========================================
platform darwin -- Python 3.6.2, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /Users/igenius/pytest-asyncio, inifile:
plugins: asyncio-0.8.0
collected 1 item
test_say_pytest_asyncio.py .                                                                 [100%]
============== 1 passed in 0.01 seconds ==============

Mocking coroutines

Let’s now consider a more complex example:
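A sketch of the coroutine (the stream.py module name and the exact delay range are our assumptions):

# stream.py
import asyncio
import random

from say import say


async def stream_of_consciousness(sentences):
    # Schedule a say() call with a random delay for each input sentence
    coros = [say(sentence, random.randint(1, 5)) for sentence in sentences]
    # Print each sentence as soon as the corresponding coroutine completes
    for next_sentence in asyncio.as_completed(coros):
        print(await next_sentence)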

The stream_of_consciousness coroutine prints out a bunch of sentences passed as input after randomly selected intervals, using the say coroutine defined in the previous example. We want to test it without depending on the real implementation of say, because otherwise its execution would take a long time. A possible way to fake the implementation of a function for testing purposes is to patch it using the mock module provided by unittest. So let’s try it:
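Something along these lines (the call-count assertion is illustrative):

# test_stream.py
from unittest.mock import patch

import pytest

from stream import stream_of_consciousness


@pytest.mark.asyncio
async def test_stream_of_consciousness_call_say_func():
    sentences = ['Hello!', 'World!']
    # Patch say in the module where it is looked up, not where it is defined
    with patch('stream.say') as mocked_say:
        await stream_of_consciousness(sentences)
    assert mocked_say.call_count == len(sentences)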

If you try to run the test above, this is what is going to happen (--tb=native has been used to limit the output):

$ pytest test_stream.py --tb=native
================== test session starts ==================
platform darwin -- Python 3.6.2, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /Users/igenius/pytest-asyncio, inifile:
plugins: asyncio-0.8.0
collected 1 item
test_stream.py F                                                                             [100%]
================== FAILURES ===================
____________________________ test_stream_of_consciousness_call_say_func ____________________________
Traceback (most recent call last):
  File "/Users/igenius/pytest-asyncio/test_stream.py", line 11, in test_stream_of_consciousness_call_say_func
    await stream_of_consciousness(sentences)
  File "/Users/igenius/pytest-asyncio/test_stream.py", line 14, in stream_of_consciousness
    for next_sentence in asyncio.as_completed(coros):
  File "/Users/igenius/.pyenv/versions/3.6.2/lib/python3.6/asyncio/tasks.py", line 433, in as_completed
    todo = {ensure_future(f, loop=loop) for f in set(fs)}
  File "/Users/igenius/.pyenv/versions/3.6.2/lib/python3.6/asyncio/tasks.py", line 433, in <setcomp>
    todo = {ensure_future(f, loop=loop) for f in set(fs)}
  File "/Users/igenius/.pyenv/versions/3.6.2/lib/python3.6/asyncio/tasks.py", line 526, in ensure_future
    raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
TypeError: An asyncio.Future, a coroutine or an awaitable is required
------------------ Captured log setup ----------------
selector_events.py          65 DEBUG    Using selector: KqueueSelector
selector_events.py          65 DEBUG    Using selector: KqueueSelector
=============== 1 failed in 0.03 seconds ==============

The error is self-explanatory: ensure_future, which is called internally by asyncio.as_completed, expects an awaitable object to be passed as input.

By default, patch replaces the target object with a MagicMock instance, which is not awaitable. To fix this error, we need to explicitly tell patch to use a mock that acts as a coroutine, which we can conveniently find in the asynctest package. In the snippet below, we tell patch to build a CoroutineMock instead of the standard MagicMock; when called, the CoroutineMock returns a coroutine object to be awaited.
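A sketch of the fixed test, under the same assumptions as before:

# test_stream.py
from unittest.mock import patch

import pytest
from asynctest import CoroutineMock

from stream import stream_of_consciousness


@pytest.mark.asyncio
async def test_stream_of_consciousness_call_say_func():
    sentences = ['Hello!', 'World!']
    # new_callable makes patch build a CoroutineMock instead of a MagicMock;
    # calling the mock now returns a coroutine object that can be awaited
    with patch('stream.say', new_callable=CoroutineMock) as mocked_say:
        await stream_of_consciousness(sentences)
    assert mocked_say.call_count == len(sentences)

Since CoroutineMock records calls just like any other mock, the usual assertion helpers keep working.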


Conclusions

Testing asynchronous code requires a test runner capable of supporting a completely different execution paradigm. In a nutshell, it must first instantiate a loop, then send test cases to it to be executed and finally wait for their results.

This can be a very complex task without the proper tools. But thanks to its modular nature, pytest can be easily extended to provide our test cases with support for asynchronous execution, and there are even plugins that already do this for us.

pytest-asyncio and asynctest are two essential packages that anyone interested in writing tests for asyncio-based Python programs with pytest should consider in order to keep their test code clean and understandable.