Testing ASP.NET Core Web APIs — A Detailed Guide

Nehmé Bilal
Tech @ Earnin
18 min read · May 6, 2021


ASP.NET Core is a cross-platform, high-performance, open-source framework for building modern web applications. Among its many features, ASP.NET Core can be used to build fast and secure Web APIs. It also has first-class support for dependency injection and testing, and benefits from special treatment by most cloud providers, such as Microsoft Azure and Amazon AWS.

In this article, I will show you how to fully test your ASP.NET Core Web API and discuss best practices that you can use to keep your code and tests clean. While the article is focused on ASP.NET Core, most of the concepts discussed are applicable to other frameworks and programming languages. We will discuss the following:

  • Testing the API layer (aka controllers) in isolation.
  • Testing the storage layer in isolation.
  • Testing the integration with downstream services in isolation.
  • Testing the business logic in isolation from the database and other network resources.
  • The use of mocks versus stubs.
  • Testing the dependency injection layer to avoid startup failures due to unwired dependencies.
  • Building tests that can run in parallel with no side effects on each other (even when interacting with the database).
  • The use of code coverage tools.

One of the goals of the article is to provide a clear methodology for writing tests and clarify what should be a unit test versus what should be an integration test, which is often a confusing choice.

Note that the examples discussed in this article can be found in this GitHub repo (if you prefer to view the code in your IDE).

Use Case

To demonstrate the concepts discussed in this article, we will develop a simple Profile Service. The service exposes an API that can be used to manage user profiles. The APIs are as follows:
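Roughly, the API surface looks like this (routes and verbs are illustrative; see the GitHub repo for the exact definitions):

```
GET    /api/profiles/{userName}    Returns the user's full profile (profile + employer info)
PUT    /api/profiles/{userName}    Creates or updates the user's profile
DELETE /api/profiles/{userName}    Deletes the user's profile
```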

The controller implementation is shown below. Note that the FullProfile object represents the user’s profile in addition to data aggregated from other sources, such as the employer info (the employer’s address, phone number, etc.). The Profile object contains the data that is editable by the user, including their employer name. To get the employer info, we use a third-party external service.
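A minimal sketch of what such a controller looks like (the route and member names here are assumptions for illustration, not the exact repo code):

```csharp
[ApiController]
[Route("api/profiles")]
public class ProfileController : ControllerBase
{
    private readonly IProfileService _profileService;

    public ProfileController(IProfileService profileService)
    {
        _profileService = profileService;
    }

    [HttpGet("{userName}")]
    public async Task<ActionResult<FullProfile>> GetProfile(string userName)
    {
        try
        {
            // Delegate all business logic to the service layer.
            return await _profileService.GetProfileAsync(userName);
        }
        catch (ProfileNotFoundException)
        {
            // Translate domain exceptions to HTTP status codes.
            return NotFound();
        }
    }
}
```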

The Service Layer Interface is As Follows:
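A sketch of the interface (method names are assumptions; notice that only the exceptions carry documentation):

```csharp
public interface IProfileService
{
    /// <exception cref="ProfileNotFoundException">The user does not exist.</exception>
    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    /// <exception cref="EmployerServiceUnavailableException">The employer service cannot be reached.</exception>
    Task<FullProfile> GetProfileAsync(string userName);

    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    Task AddOrUpdateProfileAsync(Profile profile);

    /// <exception cref="ProfileNotFoundException">The user does not exist.</exception>
    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    Task DeleteProfileAsync(string userName);
}
```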

A Few Things to Note So Far:

  • The controller does not contain any business logic; its main responsibility is to translate HTTP requests to model objects (and model objects to HTTP responses) and then delegate the work to the service layer, which handles the business logic. In addition, the controller is responsible for translating exceptions thrown from the service layer to HTTP status codes.
  • The service layer is not aware of request/response objects and is agnostic to the fact that it is being called from a controller. This gives us good separation of concerns and focused responsibilities, which makes the code more readable, testable and reusable.
  • Looking at the IProfileService code documentation, you will notice that I only documented the exceptions thrown from the service layer interface and did not describe methods, parameters, and return values. This is because the method names and inputs/outputs are self-explanatory and would not benefit from redundant comments such as “returns the profile of a user”. Only the exceptions thrown are not obvious in this case.
  • Notice that I didn’t handle StorageUnavailableException or EmployerServiceUnavailableException in the controllers. This means that if either of these exceptions is thrown by the service layer, the API will return 500 (Internal Server Error), which is not the desired behavior. It’s better to return 503 (Service Unavailable) in this case to let the caller know that this is a temporary issue that may be recovered with retries. To return 503, we could catch these exceptions in every controller method, but that would lead to duplicate code. A more elegant solution is to use an ASP.NET Core middleware that handles such exceptions in one place, as shown below.
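A sketch of such a middleware (the class name and the exact exception set are assumptions):

```csharp
public class ExceptionMiddleware
{
    private readonly RequestDelegate _next;

    public ExceptionMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception e) when (e is StorageUnavailableException
                                    or EmployerServiceUnavailableException)
        {
            // Transient infrastructure failures map to 503 so callers know to retry.
            context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
        }
    }
}
```

The middleware is registered once, early in the pipeline (e.g. app.UseMiddleware&lt;ExceptionMiddleware&gt;() in Startup.Configure), so it wraps every controller.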

Controller Tests

This is enough implementation for us to start writing the tests for the controller layer in isolation from the service and storage layers. Note that this is a top-down approach where we first implement the controllers and corresponding tests, then the service layer and corresponding tests, and finally the storage layer and corresponding tests. Alternatively, we could use a bottom-up approach, or even write the full implementation first and then write the tests. One advantage of the top-down test-driven approach is that it helps isolate the tests of a given layer from the implementation of the layer below. In fact, with the top-down approach, at the time the tests of a given layer are written, the implementation of the layer below does not yet exist. I personally alternate between top-down, bottom-up, and even unstructured approaches, depending on application-specific factors.

Because we want to test the controller in isolation, we will use a mock to replace IProfileService with a test double. The mock can be configured to simulate the happy path or edge cases. We will also use ASP.NET Core’s TestServer class to start an in-memory HTTP server that behaves like a real server (without the network overhead). To keep things concise, I only included the tests of the GET profile API, which are shown below.
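Here is a sketch of those tests using xUnit and Moq (user names, routes, and model shapes are illustrative):

```csharp
public class ProfileControllerTests : IAsyncLifetime
{
    private readonly Mock<IProfileService> _profileServiceMock = new Mock<IProfileService>();
    private IHost _host;
    private HttpClient _client;

    public async Task InitializeAsync()
    {
        // Same Startup and initialization code as the real server,
        // with the real IProfileService overridden by a mock.
        _host = await Program.CreateHostBuilder(Array.Empty<string>())
            .ConfigureWebHost(webHost =>
            {
                webHost.UseTestServer();
                webHost.ConfigureTestServices(services =>
                    services.AddSingleton(_profileServiceMock.Object));
            })
            .StartAsync();

        _client = _host.GetTestClient();
    }

    public async Task DisposeAsync() => await _host.StopAsync();

    [Fact]
    public async Task GetProfile_HappyPath()
    {
        var fullProfile = new FullProfile
        {
            Profile = new Profile { UserName = "jdoe", EmployerName = "Acme" }
        };
        _profileServiceMock
            .Setup(s => s.GetProfileAsync("jdoe"))
            .ReturnsAsync(fullProfile);

        var response = await _client.GetAsync("api/profiles/jdoe");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        var returned = JsonSerializer.Deserialize<FullProfile>(
            await response.Content.ReadAsStringAsync(),
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        Assert.Equal("jdoe", returned.Profile.UserName);
    }

    [Fact]
    public async Task GetProfile_ReturnsNotFound_WhenUserDoesNotExist()
    {
        _profileServiceMock
            .Setup(s => s.GetProfileAsync(It.IsAny<string>()))
            .ThrowsAsync(new ProfileNotFoundException("jdoe"));

        var response = await _client.GetAsync("api/profiles/jdoe");

        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }

    [Fact]
    public async Task GetProfile_Returns503_WhenStorageIsUnavailable()
    {
        _profileServiceMock
            .Setup(s => s.GetProfileAsync(It.IsAny<string>()))
            .ThrowsAsync(new StorageUnavailableException());

        var response = await _client.GetAsync("api/profiles/jdoe");

        Assert.Equal(HttpStatusCode.ServiceUnavailable, response.StatusCode);
    }
}
```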

For every test, we first create a TestServer in the InitializeAsync() method, which is called before the execution of every test (see the IAsyncLifetime interface of xUnit). By using Program.CreateHostBuilder, we are using the same Startup class and initialization code used to start the real web server (including dependency injection). Notice that we added the mock implementation of IProfileService to the dependency injection (DI) container, which overrides the existing registration in the container. This allows us to swap the real implementation of IProfileService with a mock when running the tests, which makes it possible to test the controller in isolation and to simulate edge cases.

The first test (GetProfile_HappyPath) ensures that the happy path works as expected. This includes testing the following:

  • The HTTP request is mapped correctly to the controller method. In fact, if we make any changes that affect the URI of the profile resource or the HTTP verb used, this test will fail.
  • The IProfileService is called to retrieve the profile.
  • The profile is serialized correctly to JSON and returned to the caller.
  • The correct status code is returned (200 in this case).

The second test verifies that 404 is returned when the user is not found, and the third and fourth tests verify that 503 is returned when we’re unable to reach the database or the employer service. Note that the third and fourth tests are not only testing the controller but also the ExceptionMiddleware, which handles the mapping of exceptions to 503. If the middleware is not wired, these tests will fail. Also notice how convenient it is to use a mock to simulate exceptions, which would be very difficult to achieve with the real implementation.

Finally, notice that I used private methods to avoid repeating code in some of the tests. In fact, tests should be kept clean and are subject to the same coding standards as production code (i.e. it’s not ok to copy/paste code in tests). As Uncle Bob puts it elegantly: Tests are as important to the health of a project as the production code is. Perhaps they are even more important, because tests preserve and enhance the flexibility, maintainability, and reusability of the production code. So keep your tests constantly clean.

Once the tests are passing, it’s important to check that we have covered all the relevant branches and lines of code. One simple way to do this is to use a code coverage tool. In the Rider IDE, for example, coverage results are shown as green dashes in the gutter next to covered code and white dashes next to uncovered code. In our case, the coverage report shows that our tests cover both the happy path and the 404 code paths. Similarly, it shows that the ExceptionMiddleware is covered with tests (coverage screenshots omitted).

A Note On Code Coverage

It’s important to note that 100% code coverage does not necessarily indicate good tests and is not even a confirmation that you have written all the needed tests. The tests must verify that the code behaves as expected and catch as many bugs as possible, which allows you to continuously improve your implementation without worrying about introducing new bugs. In fact, it’s possible to achieve 100% test coverage using tests that do not contain the appropriate assertions. For example, in the GetProfile_HappyPath test above, I could have stopped after asserting that the HTTP status code is 200. The test would pass and the code coverage report would show high coverage even though the test would not be verifying that the controller is returning the appropriate response body. So you should only rely on code coverage tools to identify missing tests, not as a way to measure the quality and completeness of your testing.

There are no tools (yet) that can tell you whether you have written all the needed assertions. Missing assertions (or tests) are usually revealed by bugs that are often discovered in production. Of course, it would be preferable to identify the missing assertions upfront to avoid production bugs. Code reviews can be an effective way of identifying such issues (including reviewing your own code and tests). In fact, the more readable the tests are, the easier it is for code reviewers to spot missing assertions and tests. This is just another reason why making the tests readable is very important; tests that are hard to read are often skipped over by code reviewers. Manual and automated testing done by QA, as well as dogfooding, are also good ways to discover bugs.

Another interesting aspect of code coverage is that it can allow you to discover issues in your tests and at times bugs in your code. In fact, one can write a test thinking that it covers a certain branch of the code but in reality it doesn’t. The test passes but for the wrong reasons (often a matter of bad luck). Looking at the code coverage report would reveal that the code is actually not covered and that the test is buggy. Fixing the test can also reveal a bug in the code. I’ve seen this happening on more than one occasion.

At Earnin (where I currently work), we use SonarQube to analyze code coverage on every pull request. This allows us to set a minimum code coverage requirement as a PR gate. SonarQube also allows code reviewers to conveniently open and visualize the code coverage report associated with a given PR, which is much easier than pulling down the code and running coverage analysis in the IDE. After we started using SonarQube, we noticed that the overall test coverage of the repo started climbing. Without such an automated CI gate, it was hard to tell whether a given PR had enough test coverage, and developers weren’t writing enough tests. So I highly recommend using something like SonarQube to monitor test coverage on PRs.

Implementing and Testing the Service Layer

Now that we have implemented and tested the controller, it’s time to move on to implementing and testing the service layer. The service layer is responsible for orchestrating the handling of a request. It manages the interaction with the storage layer and other parts of the application that are needed to handle requests.

As you can see below, the profile service implementation in this case is very simple: it delegates the work to IProfileStore and IEmployerService. There is only a bit of logic to handle the case where a user has no employer.
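A sketch of that implementation (type and member names are assumptions):

```csharp
public class ProfileService : IProfileService
{
    private readonly IProfileStore _profileStore;
    private readonly IEmployerService _employerService;

    public ProfileService(IProfileStore profileStore, IEmployerService employerService)
    {
        _profileStore = profileStore;
        _employerService = employerService;
    }

    public async Task<FullProfile> GetProfileAsync(string userName)
    {
        var profile = await _profileStore.GetProfileAsync(userName);

        // The only real business logic: a user may not have an employer.
        Employer employer = null;
        if (!string.IsNullOrEmpty(profile.EmployerName))
        {
            employer = await _employerService.GetEmployerAsync(profile.EmployerName);
        }

        return new FullProfile { Profile = profile, Employer = employer };
    }

    public Task AddOrUpdateProfileAsync(Profile profile) =>
        _profileStore.AddOrUpdateProfileAsync(profile);

    public Task DeleteProfileAsync(string userName) =>
        _profileStore.DeleteProfileAsync(userName);
}
```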

The IProfileStore and IEmployerService interfaces are as follows:
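Sketched here with assumed method names, mirroring the service layer interface:

```csharp
public interface IProfileStore
{
    /// <exception cref="ProfileNotFoundException">The user does not exist.</exception>
    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    Task<Profile> GetProfileAsync(string userName);

    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    Task AddOrUpdateProfileAsync(Profile profile);

    /// <exception cref="ProfileNotFoundException">The user does not exist.</exception>
    /// <exception cref="StorageUnavailableException">The database cannot be reached.</exception>
    Task DeleteProfileAsync(string userName);
}

public interface IEmployerService
{
    /// <exception cref="EmployerServiceUnavailableException">The employer service cannot be reached.</exception>
    Task<Employer> GetEmployerAsync(string employerName);
}
```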

This is enough code for us to test the service layer in isolation, as shown below.
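For example, the happy path and the no-employer case can be tested like this (stub constructors and model shapes are illustrative):

```csharp
public class ProfileServiceTests
{
    [Fact]
    public async Task GetProfileAsync_ReturnsProfileWithEmployerInfo()
    {
        var profileStore = new ProfileStoreStub();
        await profileStore.AddOrUpdateProfileAsync(
            new Profile { UserName = "jdoe", EmployerName = "Acme" });

        var employerService = new EmployerServiceStub(
            new Employer { Name = "Acme", PhoneNumber = "555-0100" });

        var service = new ProfileService(profileStore, employerService);

        var fullProfile = await service.GetProfileAsync("jdoe");

        Assert.Equal("jdoe", fullProfile.Profile.UserName);
        Assert.Equal("Acme", fullProfile.Employer.Name);
    }

    [Fact]
    public async Task GetProfileAsync_ReturnsNoEmployerInfo_WhenUserHasNoEmployer()
    {
        var profileStore = new ProfileStoreStub();
        await profileStore.AddOrUpdateProfileAsync(new Profile { UserName = "jdoe" });

        var service = new ProfileService(profileStore, new EmployerServiceStub());

        var fullProfile = await service.GetProfileAsync("jdoe");

        Assert.Null(fullProfile.Employer);
    }
}
```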

Notice that I used a stub (implementation below) instead of a mock to replace the storage layer with a test double. In fact, the behavior of the storage layer is largely defined by its state, which makes it easy to create a corresponding in-memory implementation that behaves exactly like the real database (aka a stub). Stubs make the tests more readable and have the advantage of being reusable across multiple tests. Mocks, on the other hand, must be configured in every test but have the advantage of being more flexible (they can do anything you want them to). Mocks are also great for testing edge cases, such as simulating that the database is down (by throwing a certain exception), which is hard to do with a reusable stub. If you find yourself passing flags to a stub to make it behave a certain way (e.g. to make it throw an exception), it’s a sign that a mock is more appropriate. Martin Fowler’s Mocks Aren’t Stubs is a good article that discusses the use of stubs versus mocks in more depth.

Below is the code of the ProfileStoreStub. Notice that I used AsyncLock from the AsyncEx library in combination with a Dictionary, instead of using a ConcurrentDictionary. This makes it simpler to avoid race conditions when performing non-atomic operations, such as checking whether a key exists in the Dictionary before updating the corresponding entry. It’s important to favor simplicity and correctness over saving a few CPU cycles when implementing stubs. It’s also important to ensure that the stub is thread-safe when it’s shared between multiple tests that can run in parallel. In some cases, even if the stub is not shared between tests, it can be called from multiple threads within the code under test (e.g. if the code under test uses Task.WhenAll).
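A sketch of the stub (AsyncLock comes from the Nito.AsyncEx package; exception constructors are assumptions):

```csharp
public class ProfileStoreStub : IProfileStore
{
    private readonly AsyncLock _lock = new AsyncLock();
    private readonly Dictionary<string, Profile> _profiles = new Dictionary<string, Profile>();

    public async Task<Profile> GetProfileAsync(string userName)
    {
        using (await _lock.LockAsync())
        {
            if (!_profiles.TryGetValue(userName, out var profile))
            {
                throw new ProfileNotFoundException(userName);
            }
            return profile;
        }
    }

    public async Task AddOrUpdateProfileAsync(Profile profile)
    {
        using (await _lock.LockAsync())
        {
            _profiles[profile.UserName] = profile;
        }
    }

    public async Task DeleteProfileAsync(string userName)
    {
        using (await _lock.LockAsync())
        {
            if (!_profiles.Remove(userName))
            {
                throw new ProfileNotFoundException(userName);
            }
        }
    }
}
```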

The IEmployerService can also be implemented with a stub because its behavior is mostly defined by the available employer list, as shown below.
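A sketch of that stub (the params constructor is an illustrative convenience):

```csharp
public class EmployerServiceStub : IEmployerService
{
    private readonly Dictionary<string, Employer> _employers;

    public EmployerServiceStub(params Employer[] employers)
    {
        _employers = employers.ToDictionary(e => e.Name);
    }

    public Task<Employer> GetEmployerAsync(string employerName)
    {
        // The stub's behavior is fully defined by the employer list passed in.
        return Task.FromResult(_employers[employerName]);
    }
}
```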

One question that often comes up is whether we need to write tests specifically for the stubs, to ensure that they don’t contain bugs. We can, but it’s not necessary, because the stub is indirectly covered by the tests that use it as a test double.

Before wrapping up the testing of the service layer (and submitting a PR to your teammates), it’s important to run test coverage analysis to make sure that you didn’t miss part of the code. I am getting 100% coverage in this case, so we’re probably ready for a review (remember that 100% code coverage only tells us that no obvious test cases are missing, not that we have written all the needed test cases).

Implementing and Testing the Storage Layer

So far we have implemented and tested the controller and service layers. Notice that we were able to implement and fully test these layers without having written a single line of code of the storage layer, other than defining its interface. In fact, we haven’t even decided yet which database technology we’re going to use. That said, in some cases, the features supported by the chosen database (e.g. support for ACID transactions) can have an impact on the storage layer interface design (not in this simple application though). You’re probably thinking that a good abstraction should not depend on the implementation. While this is true to some degree, in practice, the interface we choose makes some assumptions about the implementation. If we make late changes to the interface of the storage layer, we would need to update the stubs and the impacted tests. This is one argument for implementing the storage layer and its interface together, to avoid subsequent refactoring due to the characteristics of the database we choose.

Given the simplicity and non-relational nature of our profile entity, we can use a NoSQL database such as DynamoDB. There are many other considerations that go into selecting a database such as throughput, consistency, cost and more, so please don’t consider my simplistic analysis a guideline. The DynamoDB implementation of IProfileStore is as follows.
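A sketch of that implementation using the AWS SDK’s object persistence model (DynamoDBContext); the class name and error handling details are illustrative:

```csharp
public class DynamoDbProfileStore : IProfileStore
{
    private readonly IDynamoDBContext _context;

    public DynamoDbProfileStore(IDynamoDBContext context)
    {
        _context = context;
    }

    public async Task<Profile> GetProfileAsync(string userName)
    {
        Profile profile;
        try
        {
            profile = await _context.LoadAsync<Profile>(userName);
        }
        catch (AmazonDynamoDBException e)
        {
            // Wrap implementation-specific exceptions; callers only know
            // about the exceptions documented on IProfileStore.
            throw new StorageUnavailableException("DynamoDB is unavailable", e);
        }

        if (profile == null)
        {
            throw new ProfileNotFoundException(userName);
        }
        return profile;
    }

    public async Task AddOrUpdateProfileAsync(Profile profile)
    {
        try
        {
            await _context.SaveAsync(profile);
        }
        catch (AmazonDynamoDBException e)
        {
            throw new StorageUnavailableException("DynamoDB is unavailable", e);
        }
    }

    public async Task DeleteProfileAsync(string userName)
    {
        await GetProfileAsync(userName); // throws ProfileNotFoundException if absent
        try
        {
            await _context.DeleteAsync<Profile>(userName);
        }
        catch (AmazonDynamoDBException e)
        {
            throw new StorageUnavailableException("DynamoDB is unavailable", e);
        }
    }
}
```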

Notice that we always catch AmazonDynamoDBException and wrap it with StorageUnavailableException. This is important because the IProfileStore interface should not throw any implementation specific exceptions but only exceptions that are documented on the interface. This makes it easier to stub the interface and also makes it possible to replace the DynamoDB implementation with another (e.g. Azure Cosmos DB) without changing the interface.

The storage layer is a boundary layer in the sense that it talks directly to an external system through the network (using a 3rd party SDK or an HttpClient). This is also the case for the IEmployerService implementation which talks directly to an external service. The most reliable approach to test boundary layers is using integration tests. This allows us to ensure that we are adhering to the external service’s data contract but also that the external service handles our requests and responds to us in the way that we expect. In addition, the integration tests allow us to verify that we have the appropriate configuration and credentials (e.g. connection string, Api key/token, etc).

Some databases come with an in-memory stub (often called an emulator) that behaves exactly like the real database. This is particularly useful if you can spin up a new instance of the in-memory database in every test (provided this is fast enough). This approach takes away the risk of conflicts between tests because each test has its own isolated database. If you use such an emulator, make sure to complement it with a few tests that hit the real database, just to verify that your connection to the real system is working.

In this article, we will use the real database for testing, which is the most common approach when there is no in-memory emulator. Below are the integration tests for the profile store.
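A sketch of those tests (note the GUID usernames and the cleanup in DisposeAsync):

```csharp
public class ProfileStoreIntegrationTests : IAsyncLifetime
{
    private readonly IProfileStore _profileStore;
    private readonly string _userName = Guid.NewGuid().ToString();

    public ProfileStoreIntegrationTests()
    {
        // Resolve whichever implementation is currently registered in the DI container.
        var host = Program.CreateHostBuilder(Array.Empty<string>()).Build();
        _profileStore = host.Services.GetRequiredService<IProfileStore>();
    }

    public Task InitializeAsync() => Task.CompletedTask;

    public async Task DisposeAsync()
    {
        // Guaranteed to run after each test, even if the test fails.
        try
        {
            await _profileStore.DeleteProfileAsync(_userName);
        }
        catch (ProfileNotFoundException)
        {
            // Nothing to clean up.
        }
    }

    [Fact]
    public async Task AddAndGetProfile()
    {
        var profile = new Profile { UserName = _userName, EmployerName = "Acme" };
        await _profileStore.AddOrUpdateProfileAsync(profile);

        var retrieved = await _profileStore.GetProfileAsync(_userName);

        Assert.Equal(profile.EmployerName, retrieved.EmployerName);
    }

    [Fact]
    public async Task GetProfile_Throws_WhenUserDoesNotExist()
    {
        await Assert.ThrowsAsync<ProfileNotFoundException>(
            () => _profileStore.GetProfileAsync(_userName));
    }
}
```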

A Few Things to Note:

  • The integration tests are unaware of the implementation of IProfileStore being used. This makes them reusable with almost no change should we decide to use a different database.
  • We use the DI container to create an instance of IProfileStore. This means that the integration tests will exercise the implementation of IProfileStore that is currently being registered in the DI container. In other words, we could register a different implementation of IProfileStore and the integration tests above would still work.
  • Every test uses a unique GUID as a username, which is the row key used in our DynamoDB table. This is very important because tests may run in parallel and can conflict with each other if we do not ensure that they use separate database entities. It’s important to use a GUID (or any other way to generate a unique username) even if the tests are restricted to run serially. In fact, even though the tests may be running serially on your machine, they may be running at the same time on someone else’s machine or even in the CI build. For cases where the unique ID is an integer, you may still use a random number but it’s more likely to have collisions than with a GUID. In practice, if you’re cleaning up the database at the end of each test, a random number would work well (unless you have thousands of developers running the tests at the same time). Another more reliable approach to generate unique integer IDs is to use an auto-increment ID on a database like MySQL or Redis. You would put this behind an interface (e.g. IUniqueIdGenerator) and call it at the beginning of every test to get a new unique ID.
  • We perform a cleanup at the end of each test to keep the database clean. This is done by implementing xUnit’s IAsyncLifetime.DisposeAsync(), which is guaranteed to run after each test, even if the test fails.

Running test coverage reveals that some edge cases, such as catching AmazonDynamoDBException, were not covered by our integration tests. In fact, it’s very difficult (probably impossible) to reliably simulate such edge cases using the real database. Such edge cases are best covered by unit tests that take advantage of highly configurable mocks. Below are the unit tests that complement the integration tests to achieve full test coverage of the storage layer. Notice that unlike the integration tests, these tests are specific to DynamoDB.
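A sketch of such a unit test, using a mock of IDynamoDBContext to simulate a DynamoDB outage:

```csharp
public class DynamoDbProfileStoreTests
{
    [Fact]
    public async Task GetProfileAsync_ThrowsStorageUnavailable_WhenDynamoDbIsDown()
    {
        var contextMock = new Mock<IDynamoDBContext>();
        contextMock
            .Setup(c => c.LoadAsync<Profile>(It.IsAny<object>(), It.IsAny<CancellationToken>()))
            .ThrowsAsync(new AmazonDynamoDBException("simulated outage"));

        var store = new DynamoDbProfileStore(contextMock.Object);

        // The DynamoDB exception must be wrapped, never leaked through the interface.
        await Assert.ThrowsAsync<StorageUnavailableException>(
            () => store.GetProfileAsync("jdoe"));
    }
}
```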

Testing the IEmployerService implementation is very similar and requires both integration and unit tests. We will not implement these in this article because they do not allow us to illustrate any new concepts. Make sure to do it in your application though!

Testing that the Dependency Injection Container is Wired

As you probably know, the dependency injection container is wired at Startup. This is typically done in the Startup class as follows:
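Something along these lines (the registration lifetimes and the concrete EmployerService are illustrative):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddSingleton<IProfileService, ProfileService>();
    services.AddSingleton<IProfileStore, DynamoDbProfileStore>();
    services.AddSingleton<IEmployerService, EmployerService>();

    // AWS SDK wiring for the DynamoDB-backed store.
    services.AddAWSService<IAmazonDynamoDB>();
    services.AddSingleton<IDynamoDBContext, DynamoDBContext>();
}
```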

If you look at the test coverage report, you will find that this code has already been covered by both the controller tests and the storage layer integration tests. However, try removing the line where IProfileService is registered and run the tests: they will still pass! This is a good reminder that it’s not enough to write test cases that cover all the code; you also need to include the right assertions. The tests pass even if we exclude the IProfileService registration because we mocked out IProfileService in the controller tests, and none of the tests actually resolves IProfileService from the DI container. In fact, when we tested the ProfileService class, we manually created an instance of it using the constructor, as opposed to resolving it from the DI container. This can be the case for several of your dependencies, which is why it’s important to write DI tests to ensure that all dependencies are wired.

The DI tests are as follows:
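A minimal version builds the real host and resolves each manually constructed dependency (iterating the full service collection is a nice extension):

```csharp
public class DependencyInjectionTests
{
    [Fact]
    public void AllDependenciesCanBeResolved()
    {
        // Build the real host (same Startup as production).
        using var host = Program.CreateHostBuilder(Array.Empty<string>()).Build();

        // GetRequiredService throws if a dependency (or any of its
        // transitive dependencies) is not wired.
        Assert.NotNull(host.Services.GetRequiredService<IProfileService>());
        Assert.NotNull(host.Services.GetRequiredService<IProfileStore>());
        Assert.NotNull(host.Services.GetRequiredService<IEmployerService>());
    }
}
```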

Notice that I did not try to resolve the controller, which would fail. In fact, the controller is registered by ASP.NET Core when we call AddControllers() and is resolved automatically by ASP.NET Core when we send an HTTP request to that controller. We do not need to test this here because we already have controller tests that ensure that our controllers are wired, otherwise the TestServer would not be able to respond to our HTTP requests.

Am I Done?

To know whether you’re done (for now), make sure that you have written:

  • tests for all the classes in your application,
  • integration tests for all boundary layers (IProfileStore and IEmployerService in this case),
  • tests for all API controllers to make sure they can handle HTTP requests (including deserialization/serialization of requests/responses),
  • tests for the dependency injection layer to make sure that all dependencies are injected and can be resolved successfully.

Then, you can use a code coverage report to find code that isn’t covered (or is partially covered) by tests.

One approach I like to use to find missing test cases is to try and introduce a bug in the application code that would not be caught by existing tests. Look at the code under test and ask yourself whether the test you have just written would catch all the bugs that you could introduce in the code under test. Then add the missing test cases (or fix existing tests) and try again. This can be all done in your mind without actually introducing the bugs or running the tests. In fact, I also do this in my mind while code reviewing other people’s code and it often helps me identify missing test cases.

Keep in mind that no matter how good your tests are, there will always be bugs that can slip in. When you find a bug, it’s important to add the missing test cases first and make sure that the test fails (it catches the bug). Then, fix the bug and rerun the test which should then pass.

If you keep improving your tests over time, you will eventually get to a place where it’s very hard to introduce a bug, at least in existing code. New code will always come with some risks but the better you get at following best practices in testing, the lower that risk becomes.

Key Takeaways

  • No business logic in the controllers; a controller is just a translator between the HTTP endpoints and the service layer.
  • ASP.NET Core middleware can be used to avoid code duplication, such as handling common exceptions in one place.
  • It’s a good practice to handle transient exceptions and return 503 instead of 500.
  • Controllers can and should preferably be tested in isolation from the service layer by leveraging mocks.
  • The responsibility of controller tests is to verify that HTTP requests/responses are mapped correctly to service objects, that the correct service methods are invoked, and that the service response is translated to the appropriate HTTP status code.
  • Only document what cannot be expressed clearly with code (e.g. exceptions thrown for an interface).
  • Clean code best practices (e.g. DRY, readability, short functions, etc) apply to tests as much as they apply to production code, if not more.
  • Readable tests help code reviewers spot missing assertions and other issues in the tests. Unreadable tests cause code reviewers to skip over the tests.
  • Code coverage tools can be used to identify untested code.
  • Using a code coverage tool such as SonarQube to monitor code coverage on pull requests encourages developers to write more tests.
  • Looking at the code coverage report can reveal bugs in the tests and/or the code under test.
  • High code coverage is not a measure of the quality and completeness of the tests.
  • When you find a bug it’s important to write a failing test first, then fix the bug. If you write the tests after fixing the bug, it’s hard to confirm that the test actually catches the bug.
  • Choose stubs over mocks when the behavior of an object is fully defined by its state. Stubs are reusable and lead to more readable code than mocks. Mocks on the other hand are more configurable and are a great fit for testing edge cases, such as exceptions due to network errors.
  • Stubs should be thread-safe if they are used in a concurrent environment.
  • It’s important to wrap internal exceptions (e.g. DynamoDB exceptions) with custom exceptions to avoid coupling the abstraction to the implementation.
  • Classes that talk to an external system (a downstream service or a database) over the network are a good fit for integration tests.
  • When implementing integration tests, it’s important to keep concurrency in mind to avoid race conditions between tests running in parallel. This can be done by using unique IDs each time the tests are executed (using random IDs is a popular and simple approach).
  • It’s important to test that the dependency injection container is wired (all dependencies can be resolved) to avoid startup issues.

Interested in being a part of a collaborative engineering culture? Come join us at Earnin.
