Verifying Indirect Outputs Using Test Doubles

Mahesh Khanna
3 min read · May 5, 2024


Indirect outputs

In software testing, indirect outputs are the effects of a system’s operations that are not observable as direct outputs (such as return values) but still significantly affect the system’s behavior or state. Indirect outputs may include changes to the system’s state, calls made to other components, or other side effects that occur as a result of executing a function or process.

Example:

A good example of this is a message logging system. Calls to the API of a logger rarely return anything that indicates it did its job correctly. The only way to determine whether the message logging system is working as expected is to interact with it through some other interface — one that allows us to retrieve the logged messages.
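
To make this concrete, here is a minimal sketch (the Logger interface and InMemoryLogger class are hypothetical names, not from any particular library) of a logging API whose calls return nothing, paired with a second, retrieval-oriented method that lets a test read back the messages that were logged.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical logging API: log() returns nothing, so success is not directly observable.
interface Logger {
    void log(String message);
}

// A simple implementation that also exposes a retrieval interface,
// so a test can inspect the messages that were logged.
class InMemoryLogger implements Logger {
    private final List<String> messages = new ArrayList<>();

    @Override
    public void log(String message) {
        messages.add(message);
    }

    // The "other interface" that makes the indirect output observable.
    public List<String> getLoggedMessages() {
        return Collections.unmodifiableList(messages);
    }
}
```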

A client of the logger may be required to call the logger when certain conditions are met. These calls will not be visible on the client’s interface, but they are typically a requirement the client needs to satisfy and, therefore, something we want to test. The circumstances that should result in a message being logged are indirect output test conditions for which we need to write tests, so that we can avoid Untested Requirements.

How do we verify indirect outputs using test doubles?

Two basic styles of indirect output verification are available:

  • Procedural Behavior Verification
  • Expected Behavior

Procedural Behavior Verification:

Procedural Behavior Verification captures the calls made to a depended-on component (DOC) while the system under test (SUT) executes and then compares them with the expected calls after the SUT has finished executing. This verification involves replacing a substitutable dependency with a Test Spy. During execution of the SUT, the Test Spy receives the calls and records them. After the Test Method has finished exercising the SUT, it retrieves the actual calls from the Test Spy and uses Assertion Methods to compare them with the expected calls.

The example below shows a Test Spy created using Mockito.
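
This is a minimal sketch rather than a production test: the Logger interface and Account class are hypothetical, and a Mockito mock() plays the role of the Test Spy because it records every call it receives for later verification with verify().

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class AccountSpyTest {

    // Hypothetical collaborator: the logging API whose calls are the indirect output.
    interface Logger { void log(String message); }

    // Hypothetical SUT: logs a message when a withdrawal is rejected.
    static class Account {
        private final Logger logger;
        private int balance = 100;
        Account(Logger logger) { this.logger = logger; }
        void withdraw(int amount) {
            if (amount > balance) {
                logger.log("Withdrawal rejected: insufficient funds");
            } else {
                balance -= amount;
            }
        }
    }

    @Test
    void logsWhenWithdrawalExceedsBalance() {
        // The Mockito mock acts as a Test Spy: it quietly records every call it receives.
        Logger spyLogger = mock(Logger.class);
        Account sut = new Account(spyLogger);

        // Exercise the SUT; the indirect output goes to the spy, not back to the test.
        sut.withdraw(500);

        // After execution, compare the recorded calls with the expected call.
        verify(spyLogger).log("Withdrawal rejected: insufficient funds");
    }
}
```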

Expected Behavior:

Expected Behavior involves building a “behavior specification” during the fixture setup phase of the test and then comparing the actual behavior with this Expected Behavior. It is typically done by loading a Mock Object with a set of expected procedure call descriptions and installing this object into the SUT. During execution of the SUT, the Mock Object receives the calls and compares them to the previously defined expected calls. As the test proceeds, if the Mock Object receives an unexpected call, it fails the test immediately. The test failure traceback will show the exact location in the SUT where the problem occurred because the Assertion Methods are called from the Mock Object, which is in turn called by the SUT. We can also see exactly where in the Test Method the SUT was being exercised.
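
As a sketch of this style, here is roughly how the same scenario could be tested with a Mock Object library such as jMock (my choice for illustration; the article does not name a library, and the Logger and Account types are again hypothetical). The expected call is declared during fixture setup, an unexpected call fails the test at the point where it happens, and assertIsSatisfied() checks at the end that every expected call was received.

```java
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.jupiter.api.Test;

class AccountMockTest {

    interface Logger { void log(String message); }

    // Same hypothetical SUT as in the Test Spy example.
    static class Account {
        private final Logger logger;
        private int balance = 100;
        Account(Logger logger) { this.logger = logger; }
        void withdraw(int amount) {
            if (amount > balance) {
                logger.log("Withdrawal rejected: insufficient funds");
            } else {
                balance -= amount;
            }
        }
    }

    @Test
    void logsWhenWithdrawalExceedsBalance() {
        Mockery context = new Mockery();
        Logger mockLogger = context.mock(Logger.class);

        // Fixture setup: build the behavior specification before exercising the SUT.
        context.checking(new Expectations() {{
            oneOf(mockLogger).log("Withdrawal rejected: insufficient funds");
        }});

        // Exercise the SUT; an unexpected call on the Mock Object would fail the test immediately.
        new Account(mockLogger).withdraw(500);

        // Verify that every expected call was actually received.
        context.assertIsSatisfied();
    }
}
```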

Reference: xUnit Test Patterns: Refactoring Test Code, Gerard Meszaros, 2007.
