Testing 1, 2, 3: Why Everyone Should Consider Automated Three-Layer Application Testing
Sitting down and manually testing all three layers of an application even once, let alone after every small code change, would be time-consuming, boring, and certainly not the best use of a QA team member's or quality manager's time. Perhaps some people still envision software testing this way and are understandably loath to commit their time to it, which may be why so many apps are glitching and sputtering their way through the world right now. The reality is that automated testing technology is now advanced enough to briskly test all three tiers, guarding against defects before code is ever deployed to production. Not only that, it is also possible to automate full regression testing on every commit, rejecting any commit that does not pass all tests. This can be achieved by selecting automation tools that integrate with one another and are accessible from one convenient base where the source code and test logic are stored.
A large number of cloud-based apps today consist of three layers: the backend, the API, and the frontend. Even in a microservices world, this is still fundamentally true. The goal of testing should always be the effective functioning and performance of all application layers and their dependencies. Testing any one layer without the others will ultimately result in defects and rework, and we do not want to do unnecessary rework. So, let's discuss how to avoid it.
The backend is the layer where data and logic services are created and run. It acts as the application’s data broker, executing all communication to the data stores. It is usually considered best practice to have all of the business logic in the backend — or at least as much as possible. And validating that business logic requires thorough and continuous testing.
The API layer, or middleware, is the bridge between the backend and frontend. This middle layer typically consists of application programming interfaces (APIs). These services are often accessible not just by the frontend and backend, but by outside consumers as well. The middleware is where the HTTP methods GET, POST, PUT, PATCH, and DELETE, the requests coming from the frontend, are exposed. It doesn't matter if the frontend and backend are both in prime working order; if the middleware is not available to send information between them, the application is broken.
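To make those five methods concrete, here is a minimal sketch in plain Node of the operations a middleware layer exposes. The in-memory `store` and the shape of the records are hypothetical stand-ins for a real data broker behind a `/users`-style resource:

```javascript
// Hypothetical sketch: the five HTTP verbs a middleware layer exposes,
// modeled as functions over an in-memory store standing in for the backend.
const store = new Map();

const api = {
  GET:    (id)        => store.get(id) ?? null,                 // read
  POST:   (id, body)  => { store.set(id, body); return body; }, // create
  PUT:    (id, body)  => { store.set(id, body); return body; }, // full replace
  PATCH:  (id, patch) => {                                      // partial update
    const merged = { ...store.get(id), ...patch };
    store.set(id, merged);
    return merged;
  },
  DELETE: (id)        => store.delete(id),                      // remove
};

api.POST('u1', { name: 'Ada', role: 'admin' });
api.PATCH('u1', { role: 'user' });
console.log(api.GET('u1')); // { name: 'Ada', role: 'user' }
```

Note how PATCH merges into the existing record while PUT replaces it wholesale; that distinction is exactly the kind of contract an API test suite should pin down.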
The frontend of the application is the presentation layer: the user interface (UI) rendered by the system in the form of a web page, dashboard, etc. that users or customers view and interact with. The frontend is the only thing visible to the user, so the overall quality of the application is often judged by the customer based on how well the frontend performs.
The application depends upon all three of these layers in order to function properly. It is not adequate to test just one or two of them; each must be tested independently and serially, from the backend out. The frontend depends on the middleware, which depends on the backend, so to ensure all are working, test the backend first, then the middle layer, then the frontend. And, unless you want to do little else all day, do all of this with automation.
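The inside-out ordering above can be sketched as a simple fail-fast runner: each layer's suite runs only if the layer beneath it passed. The layer checks here are hypothetical stand-ins for real test suites:

```javascript
// Sketch of serial, inside-out test ordering: backend first, then
// middleware, then frontend, stopping at the first failing layer.
function runLayeredTests(layers) {
  for (const { name, run } of layers) {
    if (!run()) return { passed: false, failedAt: name };
  }
  return { passed: true, failedAt: null };
}

const result = runLayeredTests([
  { name: 'backend',    run: () => true },  // stand-in for a unit-test suite
  { name: 'middleware', run: () => false }, // stand-in for API checks
  { name: 'frontend',   run: () => true },  // never reached: middleware failed
]);
console.log(result); // { passed: false, failedAt: 'middleware' }
```

Stopping at the first failing layer keeps the diagnosis clean: a broken backend would otherwise cascade into misleading middleware and frontend failures.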
At CMHWorks, we’re a small agile shop that shoulders heavy app-building and -running responsibilities. We have to work smart to keep customers’ apps running like well-oiled machines, and implementing three-layer test automation has been invaluable toward this end. We’ll now reveal the tools and methods that are working for us.
The Right Way to Automatically Test a Three-Layer App
Our product owners write user stories; our QA team writes specific requirements, based on the user stories, pertaining to how the application should behave; then, referring to those requirements, they model the test cases around a hypothetical user logging in, doing a, b, c and so on.
Automated testing enables our developers to fully think through the logic of what they’re building, and write test cases directly into the solution. This method ensures that they’ve thoroughly thought through what it should do and how it should work before it is committed to the code repository. It is an integral part of development to automate testing of the application’s three tiers before it’s out of the gate. This protects against disruptive defects occurring in production.
Beginning with the backend, we use several products for automated testing. For background, our backend standards are .NET Core 3.1 running on Linux in Docker containers, on AWS ECS (a managed Docker service) for development, testing, and staging, and on AWS EKS (AWS's managed, highly available Kubernetes service) for production. To test the backend, we use three testing components in .NET Core 3.1: xUnit, Moq, and Faker. They allow us to create tests that run against mocked data to validate that services are doing what they're supposed to do. We run these tests for every service and endpoint we create.
Next in the queue is the middleware, for which we use Assertible. Assertible allows us to automatically test everything in the middleware, including the APIs we create with a tool called NSwag. NSwag resides in both the backend, in .NET Core, and the frontend, where it generates all the API clients for us. NSwag is both front-facing and back-facing; its front-facing side automatically configures all the service endpoints the frontend needs to talk to, along with their configurations. For each service we create, we simply run an update, and NSwag generates a Swagger-described endpoint with which the frontend can communicate. It's a much more robust solution than many other API tools on the market, and is highly recommended. Assertible consumes that whole NSwag/Swagger configuration while testing the middleware.
Testing the frontend last, we rely on Cypress.io. For every feature and component in the system, we script what the user would do and what the result should be. Cypress allows us to convert the test cases (which would be rather mind-numbing to go through manually) into automated tests we can run with a single click in the dashboard. Now is a good time to advise anyone still using the devil they know, Selenium, for frontend testing that switching to Cypress could make their lives a lot easier. It's simpler to use, written in JavaScript, and features a highly expressive dashboard. In fact, if a frontend test fails, Cypress can report back exactly what failed and why.
Automation and Integration Pull App-Testing Together
We chose these products not only for their advanced automation features but, crucially, because they integrate with each other and with our GitHub code repository. When testing logic is stored in one place, backend tooling is on a separate platform, API tests run on yet another, and none of them talk to each other, full three-tier regression testing can feel like a confusing game of Where’s Waldo. It consumes too much time, and it’s too easy to leave some vulnerability or point of breakage uncovered.
The combination of automation and integration trims major fat from the process of verifying code before it is committed to GitHub, and prevents defects from disrupting business and user experience. Full manual regression testing for every single code change can rarely if ever be justified from a cost standpoint, to say nothing of the time and effort involved. And manually testing a change in isolation would not find any unanticipated effects it may have on any other part of the application.
Three-layer testing for each commit can realistically and cost-effectively be done only with automation.
The integration of the .NET Core test components, Cypress, and Assertible with GitHub and our CI/CD server, Jenkins, ensures that changes are not made in a vacuum, and we never have to guess what the results are. For example, Cypress and Assertible both have dashboards that display test results, and they write back to GitHub, so when someone attempts a commit, GitHub shows the results of the tests, which you can click through to the dashboards to see what failed and why.
When Every Commit is a Full Regression Test, Defects Are History
Our automated testing solution will fail a commit if it does not pass all of the tests written into the code, for all three layers. When a pull request is committed to the code repository, Jenkins fires, runs all three test suites serially, and fails the commit if any test fails. This means that every single commit is, in effect, pretty-damn-near a full regression test, eliminating defects before they are committed.
Once tests are created, they are stored and ready to fire automatically as needed. When the code changes, remember to adjust the tests accordingly; if we change just one view on the frontend, we make sure the tests support that. The next time any change is committed, the system will run the new tests against it. Version control on the testing logic, another advantage of having the test logic in the code repository, allows us to view test history in case we suspect faulty logic in the test cases might be the cause of a bug. This is especially valuable if you have multiple developers working on the same projects, as we do.
With thorough testing of every code change now possible in minutes instead of days or weeks, there's no excuse for deploying a buggy, defective application that will only boomerang back to your overworked dev team. The products mentioned in this article can be swapped out for others as long as their automation and integration capabilities are comparable; the overall goal is 100% test coverage of all layers of your application, regardless of platform. There is, and most likely always will be, a cost in licensing or subscription fees, but it's easily worth it for us when we compare it to the cost of rework, poor reviews, and unhappy users. We think you'll find it worth it too.