How we write unit tests at Evodeck Software
The first time I came into contact with unit tests was in 2010. I was a fresh graduate working with C# Express (for another company) and needed a way to add testing to the applications we were developing. I recall looking it up on the C# Professional page: Microsoft had a nice write-up with self-explanatory images on how to do it, but I wasn’t immediately sold, since the established process was manual testing alone and, at first, the apps didn’t seem to require more than that because they weren’t that big.
By 2013 I was a much more mature developer, and I found it hard to keep maintaining apps without introducing breaking changes that were not caught by the manual testing process. It was exhausting and cumbersome and took a really long time, but that was the established process. Given the facts, I decided to research, once again, how to properly test an app and came across a couple of articles referring to Unit Testing (UT) and Test Driven Development (TDD). At first I fell into the trap of trying to learn TDD instead of UT, which, judging by most of the opinions I’ve read, is a common mistake, but I eventually settled on learning UT before venturing into TDD.
When I joined Evodeck Software, one of the perks that attracted me was the idea that tests (unit or otherwise) have to be present in all of our projects (unless the client really does not want them). This might not seem like a big thing at first, but once you do a couple of projects without tests you can feel the pain of their absence.
Why unit tests?
Well, unit tests provide a simple way of testing individual parts of your code and guarding against unwanted changes. They are the base of all types of tests in software development. They exercise methods (units) to see if they behave as expected, either by verifying their return value or a change in the state of an instance (these are the most common types of tests; there are other corner cases that we won’t cover in this article).
At Evodeck Software we use the AAA pattern: Arrange, Act and Assert. In the Arrange section we create the System Under Test (SUT), which is an instance of the class whose method we are testing, along with any of its dependencies (more on that later!). In the Act section we call the method under test on the SUT, giving it the required arguments, if any. Finally, in the Assert section we compare the actual behaviour with the expected behaviour.
If it fails to match, the test fails. Otherwise it passes!
Consider the following Person class:
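A minimal sketch of such a class, matching the description below (the constructor and property names are our assumptions):

```csharp
using System;

public class Person
{
    // Public properties with internal setters: only code in this
    // assembly (such as UpdateName) can change them.
    public string FirstName { get; internal set; }
    public string LastName { get; internal set; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public void UpdateName(string firstName, string lastName)
    {
        // Throw if either new value is null.
        FirstName = firstName ?? throw new ArgumentNullException(nameof(firstName));
        LastName = lastName ?? throw new ArgumentNullException(nameof(lastName));
    }
}
```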
Nothing fancy here: just two public properties exposing the first and last name of a person, both with internal setters so their values can only be changed through the UpdateName method. Note that if either of the new values is null it will throw an ArgumentNullException. Very simple!
How do we test this?
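A possible xUnit version of these tests, using a nested class per method for prose-like naming (test names and data values are illustrative):

```csharp
using System;
using Xunit;

public class PersonTests
{
    public class UpdateName
    {
        [Fact]
        public void WhenGivenNewValues_UpdatesTheFirstName()
        {
            // Arrange
            var sut = new Person("John", "Doe");

            // Act
            sut.UpdateName("Jane", "Smith");

            // Assert
            Assert.Equal("Jane", sut.FirstName);
        }

        [Fact]
        public void WhenGivenNewValues_UpdatesTheLastName()
        {
            var sut = new Person("John", "Doe");

            sut.UpdateName("Jane", "Smith");

            Assert.Equal("Smith", sut.LastName);
        }

        [Theory]
        [InlineData(null, "Smith")]
        [InlineData("Jane", null)]
        public void WhenGivenANullValue_ThrowsArgumentNullException(string first, string last)
        {
            var sut = new Person("John", "Doe");

            Assert.Throws<ArgumentNullException>(() => sut.UpdateName(first, last));
        }

        [Theory]
        [InlineData(null, "Smith", "firstName")]
        [InlineData("Jane", null, "lastName")]
        public void WhenGivenANullValue_ReportsTheOffendingParameter(
            string first, string last, string expectedParam)
        {
            var sut = new Person("John", "Doe");

            var ex = Assert.Throws<ArgumentNullException>(() => sut.UpdateName(first, last));

            Assert.Equal(expectedParam, ex.ParamName);
        }
    }
}
```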
As you can see, in the first two tests we verify an update to the instance state and compare it with the expected value.
In the third test we verify that the expected exception is thrown, and in the last test we verify the expected parameter name of the exception.
The first two tests use the Fact attribute to run a single test, while the other two use the Theory attribute to run the same test with different parameters, also known as Parameter Value Coverage (PVC). Also notice that the tests follow a prose-like description, an approach borrowed from Behaviour Driven Development (BDD), so tests are more readable and better organised through the use of nested classes.
Isolated Unit Tests
You might have noticed that each test asserts one condition alone. You might also ask: “Why not use two expected values and two asserts, one for the first name and one for the last name?”. The answer is quite simple: because we follow the pattern of isolated unit tests. We write one test per assertion to be as granular as possible, which helps us quickly identify which expectation failed. For our example, imagine having both assertions in a single unit test:
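A sketch of that combined test might look like this (the test name is illustrative):

```csharp
using Xunit;

public class PersonTests
{
    public class UpdateName
    {
        [Fact]
        public void WhenGivenNewValues_UpdatesFirstAndLastName()
        {
            var sut = new Person("John", "Doe");

            sut.UpdateName("Jane", "Smith");

            Assert.Equal("Jane", sut.FirstName);  // if this assertion fails...
            Assert.Equal("Smith", sut.LastName);  // ...this one never runs
        }
    }
}
```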
Sure, this code base is smaller, but let’s imagine, for simplicity’s sake, that our Person implementation does nothing. If the first Assert fails, the second Assert will never run and we will not know whether it would pass. This can be cumbersome when you are trying to fix a bug and changing the implementation without getting full feedback on the implications of your changes.
What about Setup and Teardown?
This pattern does not use Setup (arranging test data) and Teardown (clearing test data after the test ran), which makes it more readable in the long term (especially when you have to change tests after a couple of weeks and don’t recall all the setups at each nesting level).
This pattern is not all pros; it also has its cons. One of them is that your test code base might grow quite big and you might find yourself repeating code (careful with the copy-paste, guys and girls!). One way to escape this is to use a factory for SUT creation, for example, and other helper methods for more specific cases.
We feel this approach pays off in the long run and has helped us identify problems more easily than using the traditional Setup and Teardown.
Code coverage (CC) is also an important analysis tool to use with UT. It gives us a metric of how well our tests cover the possible paths the code can take, helping us gauge whether our tests cover all possible situations or need some refactoring.
Tip: be careful when relying on CC alone, there are certain situations where you can end up with good code coverage but not testing all possible situations.
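Consider a hypothetical method like the following (the name TrySave, the _repository and _logger fields, and the exception types are all our assumptions, chosen only to illustrate the point):

```csharp
using System;

public class PersonService
{
    private readonly IPersonRepository _repository;
    private readonly ILogger _logger;

    public PersonService(IPersonRepository repository, ILogger logger)
    {
        _repository = repository;
        _logger = logger;
    }

    public bool TrySave(Person person)
    {
        try
        {
            _repository.Save(person);  // may throw
            return true;               // happy path
        }
        catch (Exception ex) when (ex is TimeoutException
                                || ex is InvalidOperationException)
        {
            // Unhappy path: a single catch block, but two distinct
            // exceptions can reach it. One test through this branch is
            // enough for 100% coverage, yet it only exercises one of them.
            _logger.Warn(ex.Message);
            return false;
        }
    }
}

public interface IPersonRepository { void Save(Person person); }
public interface ILogger { void Warn(string message); }
```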
In this example we have a method call with, apparently, two possible paths. On the first path (the happy path) the call goes through and throws no exception. On the second path (the unhappy path) the call can throw one of two possible exceptions, both handled by the same catch. Now, if we wanted to achieve 100% code coverage we could get away with two tests, one for each path, covering just one of the exceptions.
This would be inappropriate since we want to cover all possible situations for a unit and it could also mislead any of our coworkers.
A good practice, in our perspective, is to always cover the unhappy paths before the happy path: this gives us a progressively richer test setup as we add tests, finishing with the happy path test.
What do we mean when we refer to dependencies? Dependencies are the implementations your class requires in order to function properly: any other classes needed to perform method calls, on which your class therefore depends to behave as expected. Take the following example:
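A sketch of what FileWriter and IDiskWriter could look like (the member names are our assumptions; only the dependency on IDiskWriter is taken from the text):

```csharp
using System;

public interface IDiskWriter
{
    void Write(string path, string content);
}

public class FileWriter
{
    // FileWriter cannot work without something that writes to disk,
    // so IDiskWriter is a dependency, injected via the constructor.
    private readonly IDiskWriter _diskWriter;

    public FileWriter(IDiskWriter diskWriter)
    {
        _diskWriter = diskWriter ?? throw new ArgumentNullException(nameof(diskWriter));
    }

    public void WriteAll(string path, string content)
    {
        _diskWriter.Write(path, content);
    }
}
```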
In the example above we see that our FileWriter class has a dependency on an instance that implements IDiskWriter. If we wanted to test FileWriter we would need to mock that dependency instead of using the real implementation (otherwise it would be an Integration Test).
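A test that mocks the dependency with Moq might look like this (it assumes FileWriter takes an IDiskWriter in its constructor and exposes a WriteAll method that delegates to IDiskWriter.Write; those member names are illustrative):

```csharp
using Moq;
using Xunit;

public class FileWriterTests
{
    [Fact]
    public void WriteAll_DelegatesToTheDiskWriter()
    {
        // Arrange: Mock<T> gives us a dummy IDiskWriter with no real behaviour
        var diskWriter = new Mock<IDiskWriter>();
        var sut = new FileWriter(diskWriter.Object);

        // Act
        sut.WriteAll("file.txt", "hello");

        // Assert: the SUT called its dependency exactly once with these arguments
        diskWriter.Verify(m => m.Write("file.txt", "hello"), Times.Once);
    }
}
```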
As we can see, we fulfilled the FileWriter dependency using Moq’s Mock constructor, creating a dummy instance with no behaviour of its own. This is quite handy and saves us from writing helper classes just to fulfil dependencies, although writing such helpers is also a valid option and can sometimes improve readability by shortening the Arrange section.
At Evodeck Software unit tests have proven to be an essential component of all of our projects, even when there are no other types of tests. This does not mean that our code is bullet-proof, pitch perfect, or that it will work well in all cases (don’t forget that we have to combine all the smaller parts when going to production), but it gives us, the developers, the assurance that we know how the smaller parts work and can expect them to behave properly.
Thank you for reading and feel free to leave some feedback.