Before we dig into four-phase testing, a quick shoutout: most of the content in this post is drawn from Gerard Meszaros’ xUnit Test Patterns. These concepts apply to unit testing as well as automation testing.
In automation testing, there are many common test design patterns. Some include: behavior-driven, data-driven, modular, keyword-driven, and recorded testing. Each of these test design patterns offers unique benefits in specific situations. All of these tests, however, share the same four test structure phases:
• Fixture Setup
• SUT Exercise
• Result Verification
• Fixture Teardown
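The four phases above can be sketched in a single test. This is a minimal illustration using Python’s unittest; the `UserStore` class is a hypothetical SUT invented for the example, not part of any real library.

```python
import unittest

class UserStore:
    """Hypothetical SUT: a minimal in-memory user store."""
    def __init__(self):
        self._users = {}
    def add(self, name):
        self._users[name] = True
    def count(self):
        return len(self._users)
    def clear(self):
        self._users.clear()

class TestUserStore(unittest.TestCase):
    def test_add_user(self):
        # Phase 1: Fixture Setup -- put the SUT into a known, test-ready state.
        store = UserStore()
        # Phase 2: SUT Exercise -- perform the behavior under test.
        store.add("alice")
        # Phase 3: Result Verification -- compare actual against expected.
        self.assertEqual(store.count(), 1)
        # Phase 4: Fixture Teardown -- return the SUT to a clean state.
        store.clear()
```

Even when a framework hides some phases (setUp/tearDown hooks, fixtures injected by a runner), all four are still happening somewhere.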
I have yet to encounter an automated test that deviates from this common design. Even the most complex tests follow this pattern, if the reader is willing to dig for it.
Each of these phases presents its own authoring challenges. Poorly structured or disorganized tests increase maintenance and lower the quality of your code base.
One thing to note before we dive into each of the four phases: in unit test design, the word “fixture” refers to an object the test creates or manipulates within the application being tested.
Fixture Setup
Purpose: To put the System Under Test (SUT) into a specific, test-ready state, with every fixture the test needs prepared or created.
Challenge: This phase presents a difficult hurdle: setup duplication. Writing automation that puts the SUT into a state another test already establishes raises maintenance costs, wastes developer time, and lengthens test runs.
Danger Signs: Every test creates every fixture it needs from scratch. This method is called Fresh Fixture creation and is quite time consuming.
Tips: Use a single global setup to create the common fixtures that all tests can share. At my workplace, we call this “System Preparation”. This method is called Implicit Setup and can reduce runtime significantly.
SUT Exercise
Purpose: To perform the given test within the boundaries of the SUT.
Challenge: Each SUT will likely present its own unique challenges, but four-phase tests rarely have trouble with this step.
Tip: If you are often struggling with this step, ensure your test design isn’t overly complex.
Result Verification
Purpose: To validate the SUT’s behavior for the specific exercise performed.
Challenges: While most verification compares an expected value to the value the SUT actually produces, it can be performed many different ways. If your test doesn’t separate setup errors from result verification errors, it is hard to tell whether the test should stop on a given error, which can mean important tests aren’t run when they should have been.
Tips: Utilize various methods of validating your SUT. These include Delta Assertions, State Verification, Behavior Verification, Guard Assertions, or a custom verification model. Be sure you’re using the proper verification method for the situation. Also, separate verification errors from setup errors. Verification is the “bread and butter” of your tests; your results should show that.
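A Guard Assertion is one concrete way to keep setup errors out of your verification results: assert the fixture is sound before exercising the SUT, so a broken fixture fails loudly as a setup problem. This sketch uses a hypothetical `Inventory` class invented for illustration.

```python
import unittest

class Inventory:
    """Hypothetical SUT: tracks item quantities."""
    def __init__(self):
        self.items = {}
    def stock(self, item, qty):
        self.items[item] = self.items.get(item, 0) + qty
    def remove(self, item, qty):
        self.items[item] -= qty

class TestInventory(unittest.TestCase):
    def test_remove_reduces_quantity(self):
        # Fixture Setup
        inv = Inventory()
        inv.stock("widget", 5)
        # Guard Assertion: confirm setup succeeded BEFORE exercising the
        # SUT, so a broken fixture reads as a setup error rather than a
        # result verification failure.
        self.assertIn("widget", inv.items, "setup error: fixture missing")
        # SUT Exercise
        inv.remove("widget", 2)
        # Result Verification: compare actual against expected.
        self.assertEqual(inv.items["widget"], 3)
```

A Delta Assertion works similarly: capture the pre-exercise value, then verify the change rather than an absolute value, which makes the test less sensitive to leftover state.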
Fixture Teardown
Purpose: To ensure the SUT is ready for the next test by removing the fixtures created for previous tests.
Challenge: Commonly, automation developers will lazily tear down every fixture in the SUT. I can’t blame them: it saves development time and generally ensures that the SUT is, indeed, ready for subsequent testing. The issue is that tearing down all fixtures means the ones future tests need must be recreated, and recreating them costs run time.
Danger Signs: Tests often fail because of tests that ran before them. This is called Chained Testing. While chaining can save test time, it can also increase maintenance and create flaky tests.
Tips: I recommend utilizing what’s called Automated Teardown. This entails keeping track of each fixture created within the SUT for a given test, then tearing down only those fixtures. This keeps tests repeatable while preserving the Persistent Fixtures in the SUT. At my workplace, we call this “Fixture Tracking”.
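Automated Teardown can be sketched as a list of tracked fixtures that the teardown hook destroys in reverse creation order. The `Environment` class and fixture names here are hypothetical; the point is the tracking pattern, which leaves untracked (persistent) fixtures untouched.

```python
import unittest

class Environment:
    """Hypothetical SUT environment holding named fixtures."""
    def __init__(self):
        self.fixtures = set()
    def create(self, name):
        self.fixtures.add(name)
    def destroy(self, name):
        self.fixtures.discard(name)

class AutomatedTeardownTest(unittest.TestCase):
    def setUp(self):
        self.env = Environment()
        self.env.create("persistent-config")  # pre-existing; must survive
        self._created = []  # track only fixtures this test creates

    def create_fixture(self, name):
        # Register every fixture at creation time so teardown can find it.
        self.env.create(name)
        self._created.append(name)
        return name

    def tearDown(self):
        # Automated Teardown: destroy only tracked fixtures, in reverse
        # creation order, leaving persistent fixtures in place.
        for name in reversed(self._created):
            self.env.destroy(name)

    def test_tracked_fixtures_are_removed(self):
        self.create_fixture("temp-user")
        self.assertIn("temp-user", self.env.fixtures)
```

Routing all fixture creation through a helper like `create_fixture` is what makes this reliable: nothing gets created without also being registered for cleanup.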
Complications can arise when any of these four phases is overly complex. When this occurs, it shows up as a Test Smell: the test has an issue, but we only know the symptom, not its cause. These can be anything from Project Smells to Code Smells. Performing proper root-cause analysis will help determine what has gone wrong in the test.