Without an organized and smart test capability, new development becomes very costly as system complexity grows. Building, deploying, rotating human resources, innovation and so on all become troublesome as the existing code base takes more and more effort to validate. Retaining the reliability and stability of the systems takes priority over development and innovation.
What must be done is to increase the share of development and innovation, simply by transferring the human work of verifying and validating system integrity into machine time. Human resources then get more time to verify and improve the quality and stability of what is being developed right now.
Some would say that testing is not a matter for the IT architecture board. But the truth is that the way software is designed and governed ends up as a blueprint for how testing can be done and automated.
Begin early. There is usually a technical debt in implementing and improving a test automation architecture late. For that reason, I phrase and place the context as “Test Automation Architecture”. It contains a framework of test strategy as a part of the architecture, the technology strategy.
The previous model defines business and IT as jointly connected. We ensure that technology enables business in an expected way, just as IT counts on business to provide the fundamental building area, the plan, on which to fulfil the expectations.
This model shows how test coverage starts at the business demands, and represents the test levels in how the product reconnects to agreed quality and approval in the final product. More about these three levels follows in the rest of this article.
The test automation, along with new development, is there to reduce the time and effort, and increase the frequency, needed to ensure the stability and integrity of the system while it undergoes changes and modifications due to development and external stress factors.
Meaningful test approaches
The automated test coverage is solved in three test automation pipelines. Think of a complete Lego product as an analogy.
The code-level approach, unit testing, verifies the integrity, function and expected outcome of every single piece that makes up the Lego product. These tests should usually be very small and validate just the purpose each piece was built for.
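As a minimal sketch of this idea, the following unit test validates one small, single-purpose unit. The `stud_count` function and its rules are hypothetical stand-ins for any piece of your own code base:

```python
# Hypothetical single-purpose unit: the "one Lego piece" under test.
def stud_count(rows: int, columns: int) -> int:
    """Return the number of studs on a rectangular Lego brick."""
    if rows < 1 or columns < 1:
        raise ValueError("a brick has at least one row and one column")
    return rows * columns


def test_stud_count_of_standard_brick():
    # Given a standard 2x4 brick, counting its studs yields 8.
    assert stud_count(2, 4) == 8


def test_stud_count_rejects_invalid_dimensions():
    # The unit validates only the purpose it was built for,
    # including rejecting impossible input.
    try:
        stud_count(0, 4)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

A test runner such as pytest would collect and run these functions automatically in the pipeline; the point is that each test is tiny and checks exactly one piece.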
The API-level approach raises the sight; its purpose is to ensure that the external resources we depend on work as expected, i.e. that pipes, roads, emergency services, power, mail and so on can connect to the Lego product and provide service. It might be referred to as integration testing, but the API level is there to verify the validity and operability of the other operating components or services, i.e. by acting as the system and making simple request/response queries to validate connectivity, data and response times. Validating the actual response handling inside the code of the system is a unit test, not an API test.
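A sketch of such a foundation check follows. The fetch callable is injected so the same check can run against any depending endpoint (in practice it would wrap a real HTTP call); the thresholds and field names are illustrative assumptions:

```python
import time
from typing import Callable, Tuple

def check_endpoint(fetch: Callable[[], Tuple[int, bytes]],
                   max_seconds: float = 2.0) -> dict:
    """Act as the system: issue one simple request and report
    connectivity, status, payload presence and response time."""
    started = time.monotonic()
    status, body = fetch()          # raises if the service is unreachable
    elapsed = time.monotonic() - started
    return {
        "status_ok": status == 200,
        "has_body": len(body) > 0,
        "fast_enough": elapsed <= max_seconds,
    }

# Usage with a stand-in endpoint; a real pipeline would pass a lambda
# performing an HTTP GET against the depending service.
result = check_endpoint(lambda: (200, b'{"service": "up"}'))
assert all(result.values())
```

Note that the check stops at connectivity, status and timing; asserting on how the system itself processes the payload belongs at the unit level.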
Finally, the UI-level approach acts as the user would act. It is when one actually parks the car in the garage of the Lego product, which requires certain parts of the system to work as they do in reality. This approach can quickly add complex dependencies and conditions. Tests at this level should be chosen and defined primarily with business in focus, and hopefully together with business, which helps in setting goals and expectations. In return, the test reports are created and delivered regularly, e.g. each month. These tests can be complex to set up and involve many steps and integrations, but they must not be complex or flaky to maintain. A test is not allowed to fail because of expected conditional changes in the environment, such as changing test data.
In general, for all levels: the tests need not be exactly the same for each environment (but should be, to avoid extra work with manual exceptions and configuration). The automation can and should run in commonly supported CI/CD tools such as TeamCity, Azure DevOps or the Amazon equivalents.
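As a sketch of how the levels map onto such a tool, a pipeline definition might run the lower levels on every commit and publish the results. The file layout, stage names and Python toolchain below are assumptions for illustration, not prescribed by the architecture:

```yaml
# Hypothetical azure-pipelines.yml sketch; paths and tools are
# illustrative, not part of the architecture itself.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: python -m pytest tests/unit --junitxml=unit-results.xml
    displayName: 'Run unit tests'
  - script: python -m pytest tests/api --junitxml=api-results.xml
    displayName: 'Run API tests'
  - task: PublishTestResults@2
    inputs:
      testResultsFiles: '*-results.xml'
```

Keeping the suite definitions identical across environments, with only the connection configuration varying, is what avoids the manual exceptions mentioned above.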
Counting the number of tests per test level, the sums should form a pyramid structure such as the one below. Defining and maintaining tests usually becomes more complex the higher up in the pyramid you go, meaning more expensive in the trade-off against new development.
It is good practice to ensure that tests do not overlap between the levels. Optimally, the test coverage extends and completes through the hierarchy of the levels. One reason is that we do not want to introduce waste and unnecessary maintenance. This “classic” pyramid visualisation is therefore not fully accurate.
It would be more accurate if the pyramid had white gaps right below the area of each upper layer. A graph like the one on the left might be more descriptive.
This principle gives more accurate conclusions about where to start analysing when tests fail. Look at the diagram below as if its colours could change based on failure/success.
E.g. if there is a failure in both API and UI tests, the problem is likely in the API. If the API is fixed and the UI still fails, then continue investigating in the UI or in the unit/code tests.
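The triage rule above can be sketched as a small function that points to the lowest failing level in the pyramid. The wording of the return values is invented for illustration:

```python
def triage(unit_ok: bool, api_ok: bool, ui_ok: bool) -> str:
    """Suggest where to start investigating: the lowest failing
    level in the pyramid is the most likely root cause."""
    if not unit_ok:
        return "start at the unit/code tests"
    if not api_ok:
        return "start at the API tests"
    if not ui_ok:
        return "start at the UI tests"
    return "all levels pass"

# If both API and UI fail, the problem is likely at the API level.
assert triage(unit_ok=True, api_ok=False, ui_ok=False) == "start at the API tests"
```

Once the API is fixed and only the UI still fails, rerunning the rule naturally directs the investigation upward.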
“If it fails in production or in a unit test, but does not fail in a UI or API test, do we need to consider extending the test coverage?” Yes, likely. Each production correction is a candidate for adding a test at the level where it makes sense.
Reporting, descriptions and styles
UI tests need an individual suite and configuration per running test environment. Use a BDD-inspired (behaviour-driven development) style when writing test scenarios, using “Given x, when y, then z”. Example:
Story: A user accessing the application
As a user
In order to do my daily intended work
I want to be able to use the application
Scenario: Availability of the application
Given that I have visited the webpage URL
When I state my login credentials on the start page
Then I should be able to start work directly
Optimally, the reports are constructed automatically within the DevOps pipeline along with the level tests, e.g. by a tool such as Newman or Pickles. Reports can be sent automatically as a link, email or attachment to the concerned stakeholders, e.g. a business user or product owner.
API tests validate and verify the data exchange with Expects, e.g. verifying the response schema or certain elements within the response. The BDD given/when/then style is fully okay here, but might be unnecessary overhead to define. The API tests should be quite changeable. The following table is an example of a live report from API-level tests.
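A stdlib-only sketch of the “Expects” idea follows: validate that a response body matches an expected schema shape and that selected elements hold the right types. The payload and field names are invented for illustration; a tool such as Newman expresses the same checks as expect-style assertions:

```python
import json

# Hypothetical expected schema: field name -> required Python type.
EXPECTED_SCHEMA = {"id": int, "name": str, "active": bool}

def validate_response(raw_body: str) -> list:
    """Return a list of schema violations; an empty list means the
    response body passes the Expects."""
    body = json.loads(raw_body)
    violations = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in body:
            violations.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            violations.append(f"wrong type for {field}")
    return violations

# A conforming response passes; a malformed one is reported.
assert validate_response('{"id": 1, "name": "svc", "active": true}') == []
assert validate_response('{"id": "1", "name": "svc"}') == [
    "wrong type for id", "missing field: active"]
```

Keeping the schema in one declarative table like this is what makes the API tests “quite changeable”: a contract change is a one-line edit rather than a rewrite.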
Column descriptions:
Collection: the collection of Expects/tests performed within the script. The naming convention implies that there is a collection of tests per test environment, per system perspective and per depending endpoint.
Foundation: basic accessibility of the service and its response time.
Validation: get and validate the schema of each API method that is needed; if necessary, validate relevant values in the response.
Exception: provoke an error scenario and validate that the error response matches what is expected.
Unit tests are recommended to be written in BDD style, but can be done in any way that makes sense and is meaningful for the actual test situation. The format and naming are less restrictive, as these tests will usually not be exposed to anyone outside the development team within IT. Under normal conditions the reporting is perfectly fine kept within the development team and the CI/CD tool.
Strictly speaking, the test architecture does not govern how the automation and CI/CD are established. A widely accepted tool or platform helps in tracking deployments, commits, history and accessible configuration. But any method or technology capable of performing the automation, and of letting the development team focus manual test effort on new development, is likely prone to success.