Step up your reactive tests with the StepVerifier



Tests are important. In the world of reactive programming this might be especially relevant due to the more functional approach, which requires a different style of structuring your code. As usual, the possibilities for testing are endless. Therefore, throughout this article I would like to guide you through my opinionated view of how to test reactive code in the context of the Reactor framework.

And since it would be boring in pure theory, let’s start with example code!

Borrow and inventory service

Let’s assume we have a borrowing service for books. People can borrow books when they are in the inventory and in stock. The borrowing and inventory services are small microservices interconnected via a REST API, as everybody does it nowadays. At some point, the BookBorrowService uses an InventoryClient, a small wrapper for making calls to the inventory service via Spring’s WebClient abstraction for non-blocking HTTP calls. This inventory client will be the subject of this article, because there are a lot of different kinds of errors to be handled and tested.

The inventory client

In our sample application, the inventory client provides only one method, a getByName for retrieving an InventoryEntry for a given book name. Very simplified, but this should suffice for this small example.

As previously noted, we’re using the WebClient here for non-blocking HTTP calls to stay compliant with the non-blocking reactive chain. The request is initiated via a get() call, followed by the URI, and completed by retrieve().
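The original code listing is not embedded in this version of the article, so here is a minimal sketch of what such a client method could look like. The names InventoryEntry, ExternalCommunicationException, and getByName come from the article itself; the URI template, the message text, and the exact status handling are my assumptions:

```java
import org.springframework.http.HttpStatus;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

// Sketch, not the original listing; constructor omitted here
public class InventoryClient {

    private final WebClient webClient; // built in the constructor
    private final Retry retrySpec;     // exponential backoff, built in the constructor

    public Mono<InventoryEntry> getByName(String bookName) {
        return webClient.get()
                .uri("/inventory/{name}", bookName) // assumed URI template
                .retrieve()
                // 404: returning Mono.empty() from the handler suppresses the error
                // signal; the empty response then completes the chain as Mono.empty()
                .onStatus(HttpStatus.NOT_FOUND::equals, response -> Mono.empty())
                // 5xx: map to our custom exception so the retry filter can match it
                .onStatus(HttpStatus::is5xxServerError,
                        response -> Mono.error(new ExternalCommunicationException(
                                "Inventory service responded with " + response.statusCode())))
                .bodyToMono(InventoryEntry.class)
                .retryWhen(retrySpec);
    }
}
```

Returning an empty Mono from an onStatus handler to treat a status code as a normal response is documented WebClient behaviour; other 4xx codes fall through to the default error handling.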

If successful, the response is transformed into a Mono<InventoryEntry>. It is possible to add failure handling through calls to the response specification returned by the retrieve method. For example, the two calls to onStatus handle different HTTP status codes returned by the inventory service, and the retryWhen method allows us to define in which cases it is reasonable to retry failed requests.

In our example, we build the retry specification and the web client inside the constructor. You could use a Spring configuration class as well. Both have their pros and cons.

We add a custom timeout for the initial socket connection as well as read and write timeouts for the socket to the underlying HTTP client to mitigate small hiccups on the server side. Additionally, we define an exponential backoff retry strategy that retries the request to the inventory service if an ExternalCommunicationException occurred. We throw this exception ourselves when the server returned some 5xx HTTP status code:
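Since the original listing is missing here as well, the following sketch shows how such a constructor could be wired with Reactor Netty. The property accessors (getBaseUrl, getMaxRetryAttempts, getFirstBackoff) and the concrete timeout values are assumptions:

```java
// Sketch of the constructor (property names and timeout values are assumptions)
public InventoryClient(InventoryClientProperties properties) {
    HttpClient httpClient = HttpClient.create()
            // fail fast if the socket connection cannot be established
            .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 2_000)
            .doOnConnected(connection -> connection
                    .addHandlerLast(new ReadTimeoutHandler(2))    // read timeout in seconds
                    .addHandlerLast(new WriteTimeoutHandler(2))); // write timeout in seconds

    this.webClient = WebClient.builder()
            .baseUrl(properties.getBaseUrl())
            .clientConnector(new ReactorClientHttpConnector(httpClient))
            .build();

    // exponential backoff, retrying only our own communication exception
    this.retrySpec = Retry
            .backoff(properties.getMaxRetryAttempts(), properties.getFirstBackoff())
            .filter(ExternalCommunicationException.class::isInstance);
}
```

The filter ensures that, for example, a 4xx error is not retried, because only the ExternalCommunicationException thrown for 5xx responses matches it.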

The max attempts for retries and the initial retry backoff are defined via configuration properties to externalize these values.

What needs to be tested?

After putting everything together, an important question arises: Which scenarios would we like to test?

By examining the code we have at least the following cases:

  • Status Code 200: OK → everything is fine
  • Status Code 404: NOT_FOUND → we expect a Mono.empty()
  • Status Code != 404 (e.g. BAD_REQUEST) → we expect a Mono.error()
  • Status Code 5xx && max retry attempts not reached → everything is fine
  • Status Code 5xx && max retry attempts exceeded → we expect a Mono.error()
  • Service responds slowly with a delay lower than the read timeout → everything is fine
  • Service responds slowly with a delay larger than the read timeout → we expect a Mono.error()

We could retry the last case as well, but let’s assume that if the service is too slow to respond, a new request won’t help either.

Now the important question arises: What possibilities do we have to test the reactive chain? And how is this different from the usual way of writing tests for non-reactive code?

block() or StepVerifier?

Basically, there are two general directions for writing tests for the reactive chain:

  • Directly call the methods to test and receive the outcome by invoking block() on the resulting Mono or Flux.
  • Wrap these calls with the StepVerifier class provided by the Reactor framework.

As both approaches have their raison d’être, let’s formulate a first test to compare these two. I will use the MockWebServer provided by the OkHttp3 dependency to mock away the server side. The alternative would be to mock the WebClient itself, which is really cumbersome due to its fluent API. The test setup looks like the following:
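The setup listing is not embedded here, so the following is a sketch with JUnit 5 and MockWebServer. The InventoryClientProperties class and its setters are assumptions:

```java
import java.io.IOException;

import okhttp3.mockwebserver.MockWebServer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;

class InventoryClientTest {

    private static MockWebServer mockWebServer;

    private InventoryClient inventoryClient;

    @BeforeAll
    static void setUp() throws IOException {
        mockWebServer = new MockWebServer();
        mockWebServer.start();
    }

    @AfterAll
    static void tearDown() throws IOException {
        mockWebServer.shutdown();
    }

    @BeforeEach
    void before() {
        InventoryClientProperties properties = new InventoryClientProperties();
        // bind the client to the mocked server instead of the real inventory service
        properties.setBaseUrl(mockWebServer.url("/").toString());
        inventoryClient = new InventoryClient(properties);
    }
}
```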

The setUp method ensures that the MockWebServer is initialized and started correctly for the test class. At the end of the test class execution, the tearDown method stops the web server to release its resources. In the before method we initialize our class under test and bind the web client’s base URL to the URL of the mocked web server.

After we have taken care of the test setup, we may now write our first test, starting with the block approach:
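The test itself is not embedded in this version, so here is a sketch of a block-based test method inside the test class described above. The JSON body and the InventoryEntry accessor are assumptions:

```java
@Test
void retrievesInventoryEntryWithBlock() {
    // assumed JSON shape for InventoryEntry
    mockWebServer.enqueue(new MockResponse()
            .setResponseCode(200)
            .setHeader("Content-Type", "application/json")
            .setBody("{\"bookName\":\"Clean Code\",\"inStock\":true}"));

    // block() waits for the single result of the Mono
    InventoryEntry result = inventoryClient.getByName("Clean Code").block();

    assertThat(result).isNotNull();
    assertThat(result.getBookName()).isEqualTo("Clean Code");
}
```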

Here, we’re calling the getByName method of our class under test to check whether a successful request actually delivers the expected inventory entry. As you can see, we simply wait for the result by calling block() on the returned Mono. So far, everything looks very familiar.

The following test ensures the same behaviour, however this time using the built-in StepVerifier, provided by the Reactor framework:
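A sketch of the same test with the StepVerifier (again, the JSON body and the accessor are assumptions):

```java
@Test
void retrievesInventoryEntryWithStepVerifier() {
    mockWebServer.enqueue(new MockResponse()
            .setResponseCode(200)
            .setHeader("Content-Type", "application/json")
            .setBody("{\"bookName\":\"Clean Code\",\"inStock\":true}"));

    StepVerifier.create(inventoryClient.getByName("Clean Code"))
            // consume the emitted item and assert on it with AssertJ
            .assertNext(entry -> assertThat(entry.getBookName()).isEqualTo("Clean Code"))
            // expect a completion signal and start the verification
            .verifyComplete();
}
```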

The StepVerifier.create method encapsulates the Mono/Flux returned by our method and provides a builder API to formulate expectations or assertions towards the reactive flow. Additionally, we may consume the emitted items for verification as can be seen at the call to assertNext. The chain needs to be ended with an appropriate verify method. Here I used verifyComplete, because I want to ensure that no further item is emitted and the reactive chain actually sent a completion signal. At first glance, this looks very similar. So what distinguishes these two approaches now?

Both StepVerifier and block allow us to verify the output of the method. Was a correct result returned? Or an empty result? Did the method throw an exception? These questions are fairly simple to answer.

For the block variant the following return values would be expected:

  • Successful result: some InventoryEntry
  • Unknown resource (due to 404 — NOT_FOUND): null
  • Failed execution: Original exception is being thrown

One can now verify these results by using an assertion framework of their choice, e.g. AssertJ:
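For instance, the three cases could be asserted like this with AssertJ. The concrete exception type in the error case depends on the onStatus mapping and is an assumption here:

```java
// Successful result: a concrete InventoryEntry
InventoryEntry entry = inventoryClient.getByName("Clean Code").block();
assertThat(entry).isNotNull();

// Unknown resource: block() on an empty Mono simply returns null
assertThat(inventoryClient.getByName("unknown").block()).isNull();

// Failed execution: block() rethrows the original exception
assertThatThrownBy(() -> inventoryClient.getByName("bad").block())
        .isInstanceOf(RuntimeException.class); // concrete type depends on the mapping
```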

For the StepVerifier the return values stay in the reactive world, therefore we would have the following cases:

  • Successful result: some Mono<InventoryEntry>
  • Unknown resource (due to 404 — NOT_FOUND): Mono.empty()
  • Failed execution: Mono.error() with original exception being encapsulated

By using the fluent builder API we can add these verifications and assertions to the test chain:
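A sketch of the corresponding StepVerifier variants for the three cases (exception type again assumed):

```java
// Successful result: consume and assert the emitted item
StepVerifier.create(inventoryClient.getByName("Clean Code"))
        .assertNext(entry -> assertThat(entry.getBookName()).isEqualTo("Clean Code"))
        .verifyComplete();

// Unknown resource: no item is emitted, but the chain completes regularly
StepVerifier.create(inventoryClient.getByName("unknown"))
        .expectNextCount(0)
        .verifyComplete();

// Failed execution: the chain terminates with an error signal
StepVerifier.create(inventoryClient.getByName("bad"))
        .expectErrorSatisfies(error ->
                assertThat(error).isInstanceOf(RuntimeException.class))
        .verify();
```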

We’ve already seen the assertNext method. We could have used another method for verification, but I wanted to use AssertJ here for the assertion. Other possibilities would be:

  • expectNext → takes an InventoryEntry to check against
  • consumeNextWith → expects a Consumer<InventoryEntry> and you may use AssertJ here as well
  • expectNextMatches → receives a Predicate<Inventory> to check against the next emitted item

For the Mono.empty() case we can use the expectation expectNextCount(0), because no item should be emitted, but we still expect a completion signal (→ verifyComplete).

The exception case is verified via appropriate methods, following the same naming pattern. I used expectErrorSatisfies to add AssertJ assertions. Possible alternatives would be expectError, consumeErrorWith, or expectErrorMatches. As the error signal is a terminating signal as well, we may not use verifyComplete at the end, but rather verify to start the verification for this execution.

Important side note: When using the StepVerifier, don’t forget the verify call at the end. Only this starts the execution; otherwise the test would be green although nothing has been executed or verified! Like with a regular reactive chain, nothing happens until you subscribe. This is a stumbling block that is easily overlooked in the beginning.

What have we gained so far compared to the block variant?

  • Both are capable of verifying the output of the reactive operation.
  • Both can be combined with assertion frameworks.
  • The fluent builder API of the StepVerifier is quite expressive and encapsulates the inner workings away. But you better not forget a verify method call at the end.
  • The StepVerifier is closer to the reactive chain overall and lets you consume the signals appropriately.

The last point is not to be underestimated. When using reactive programming in your production code, why not proceed using it in your test code? It is technically not necessary, however I would strongly recommend it to increase the overall understanding of underlying reactive concepts for the whole team.

Additionally, there are other aspects to test when working with the reactive chain besides the raw return value. By that I mean the reactive part of your chain, like subscription or cancellation signals, delayed execution, or schedulers. These cannot be tested appropriately with black box testing.

The test cases we defined earlier actually contain such a circumstance that can be tested by taking advantage of virtual time: the retry behaviour of our getByName method with its exponential backoff strategy.

But what’s the issue with the test setup using block?
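Consider a retry test written with block. This is a sketch; the response sequence and the success body are assumptions:

```java
@Test
void retriesOnServerError() {
    // one failed attempt, then a successful response
    mockWebServer.enqueue(new MockResponse().setResponseCode(500));
    mockWebServer.enqueue(new MockResponse()
            .setResponseCode(200)
            .setHeader("Content-Type", "application/json")
            .setBody("{\"bookName\":\"Clean Code\",\"inStock\":true}"));

    // block() waits through the real backoff before the retry succeeds
    assertThat(inventoryClient.getByName("Clean Code").block()).isNotNull();
}
```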

This test takes about 10 seconds to complete due to the retry backoff of 10 seconds for the first failed attempt.

We may work around it by overriding the retry backoff for our test cases with a reasonably small value. This requires the retry backoff configuration to be externalized via properties, but we did that earlier anyway, because it’s just practical:
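For example, the before method could shrink the backoff for tests. The setter names on the properties class are assumptions:

```java
@BeforeEach
void before() {
    InventoryClientProperties properties = new InventoryClientProperties();
    properties.setBaseUrl(mockWebServer.url("/").toString());
    // keep retries fast in tests: a tiny initial backoff instead of 10 seconds
    properties.setFirstBackoff(Duration.ofMillis(10));
    properties.setMaxRetryAttempts(2);
    inventoryClient = new InventoryClient(properties);
}
```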

However, we can’t verify that no event was emitted before that time frame. What if we defined our retry specification incorrectly? We may not know with the block variant.

The StepVerifier actually gives us a feature at hand that can verify this behaviour. We can manipulate time by using StepVerifier.withVirtualTime instead of StepVerifier.create to plug in a special scheduler that can fast-forward in time to avoid long-running tests:
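A sketch of such a virtual time test, assuming an initial backoff of 10 seconds. Note that Retry.backoff applies jitter by default, so the expectNoEvent window is chosen below the smallest possible jittered backoff:

```java
@Test
void retriesWithExponentialBackoff() {
    mockWebServer.enqueue(new MockResponse().setResponseCode(500));
    mockWebServer.enqueue(new MockResponse()
            .setResponseCode(200)
            .setHeader("Content-Type", "application/json")
            .setBody("{\"bookName\":\"Clean Code\",\"inStock\":true}"));

    // the publisher MUST be created lazily inside the supplier
    StepVerifier.withVirtualTime(() -> inventoryClient.getByName("Clean Code"))
            .expectSubscription()
            // nothing may be emitted before the first backoff has elapsed
            .expectNoEvent(Duration.ofSeconds(5))
            // fast-forward through the remainder of the backoff (plus jitter)
            .thenAwait(Duration.ofSeconds(30))
            .assertNext(entry -> assertThat(entry).isNotNull())
            .verifyComplete();
}
```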

The StepVerifier.withVirtualTime method sets up the specialized virtual time scheduler, which replaces the default scheduler for this test case. Afterwards you may add expectations to the chain, like expectNoEvent, or you can just fast-forward in time via thenAwait. Due to this, the test only takes milliseconds to complete.

However, don’t forget to add expectSubscription prior to the call to expectNoEvent, because the subscription itself obviously is a signal, albeit one we’re not interested in for this expectation. Otherwise, the test would fail.

Important side note: The given Mono or Flux has to be generated inside the supplier function! You may not instantiate the variable earlier, like this:
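For example, the following sketch would NOT work reliably with virtual time:

```java
// DON'T do this: the publisher is assembled before the virtual scheduler
// is installed, so its delays are still scheduled on the real schedulers
Mono<InventoryEntry> result = inventoryClient.getByName("Clean Code");

StepVerifier.withVirtualTime(() -> result) // too late, virtual time won't apply
        .expectSubscription()
        .expectNoEvent(Duration.ofSeconds(5))
        .thenAwait(Duration.ofSeconds(30))
        .assertNext(entry -> assertThat(entry).isNotNull())
        .verifyComplete();
}
```

Because getByName is invoked eagerly here, its time-based operators capture the default schedulers before withVirtualTime can swap them out.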

The publisher needs to be generated lazily, otherwise virtual time may not work at all.

This concludes our small excursion into testing in a reactive world. When you treat the reactive chain as a black box and are only interested in the actual outcome, it doesn’t really matter whether you are using the block or the StepVerifier approach. Both will work. In my opinion, it is however strongly recommended to use the StepVerifier anyway, so that everybody in the team familiarizes themselves with the reactive chain, from production to test code. Anything else would be some kind of paradigm shift within the code base.

Finally, when it comes to virtual time or specific inner workings of the reactive chain, only the StepVerifier will suffice. And in this article I only covered the core concepts and features of the StepVerifier, namely those features that you use on a daily basis. I explicitly did not cover topics like:

  • post-execution assertions for dropped elements
  • context tests
  • TestPublisher for emulating a source or testing your own operator
  • PublisherProbe for checking the actual execution flow of data

Please refer to the Reactor reference documentation for these more sophisticated topics, in case you need them at some point.

Thanks for reading! Feel free to comment or message me, when you have questions or suggestions. You might be interested in the other posts published in the Digital Frontiers blog, announced on our Twitter account.