Performance/Functional tests in CI/CD
In this article, we discuss the role and importance of performance/functional tests for application services during CI/CD.
Imagine an application with no automated tests in its CI/CD pipeline. There is no clear way to
- test stability or fine-tune the memory/CPU cores needed to meet the SLA in an automated fashion.
- verify that the end-to-end functionality of the application as a whole is intact for every change a developer commits.
Performance tests are written to make sure the application
- meets the expected response time
- can handle the SLA-defined number of active users
- remains stable under varying loads
There are multiple tools out there to automate performance tests. At Walmart we use a homegrown SaaS tool called Automaton, which leverages JMeter agents to automate performance tests. The setup is as simple as describing the application's SLA needs in a JSON file that configures the virtual users, SLA response time, etc.
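Automaton's configuration format is internal, but a hypothetical SLA config of this general shape (the field names here are illustrative, not Automaton's actual schema) conveys the idea:

```json
{
  "testName": "checkout-service-load-test",
  "virtualUsers": 500,
  "rampUpSeconds": 60,
  "durationMinutes": 15,
  "sla": {
    "p95ResponseTimeMs": 300,
    "maxErrorRatePercent": 1
  }
}
```

The point is that the test itself is declarative: the team states the SLA targets once, and the tool drives the JMeter agents to verify them on every run.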
Functional tests are written to test the end-to-end functionality of the application, which may involve multiple services interacting with each other.
For our services, functional tests are written in the Java Cucumber framework, where tests are expressed in the Gherkin language following a Behavior-Driven Development style. A typical test, pulled from the JIRA test suite through JIRA integration, might look like this:
Feature: Is it Friday yet?
  Everybody wants to know when it's Friday

  Scenario: Sunday isn't Friday
    Given today is Sunday
    When I ask whether it's Friday yet
    Then I should be told "Nope"
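In Cucumber, each Gherkin step is bound to a Java step definition via @Given/@When/@Then annotations. A minimal sketch of the logic behind the scenario above (class and method names are illustrative; the annotations are shown as comments so the snippet stays self-contained without the Cucumber dependency):

```java
// Sketch of the logic a Cucumber step-definition class would wrap.
// In a real project, the steps would be methods annotated with
// io.cucumber.java.en.Given/When/Then in a dedicated steps class.
public class IsItFriday {

    // The domain logic the steps exercise: "TGIF" on Friday, "Nope" otherwise.
    static String isItFriday(String today) {
        return "Friday".equals(today) ? "TGIF" : "Nope";
    }

    public static void main(String[] args) {
        // Given today is Sunday
        String today = "Sunday";
        // When I ask whether it's Friday yet
        String answer = isItFriday(today);
        // Then I should be told "Nope"
        System.out.println(answer);
    }
}
```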
Performance/Functional tests in CI/CD:
Performance and functional tests are triggered in the CI/CD pipeline soon after a successful application service build (unit tests are executed during this phase) and a successful deployment.
- The performance tests ensure the service is stable, fast, and reliable, and that the SLA needs are met.
- The functional tests ensure the end-to-end functionality of the application still holds after the service changes are deployed.
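The wiring described above can be sketched as a GitLab-CI-style pipeline; all stage and script names here are hypothetical, not our actual pipeline definition:

```yaml
stages:
  - build    # compile + unit tests
  - deploy   # deploy to dev/qa
  - verify   # performance + functional tests run only after deploy succeeds

performance-tests:
  stage: verify
  script:
    - ./run-automaton.sh sla-config.json   # hypothetical wrapper for the JMeter agents

functional-tests:
  stage: verify
  script:
    - mvn verify -Dcucumber.filter.tags=@regression   # hypothetical Cucumber run
```

A failure in either `verify` job fails the pipeline, which is what triggers the rollback described below.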
From the diagram above, soon after the application is deployed to the dev/QA environments, the performance and functional tests kick in and the reports are sent to the Slack channel/email. If the performance or functional tests fail, the deployment is marked as a failure and a rollback of the deployment kicks in.
If the tests succeed, the performance and functional test reports are sent over to the team.
Performance test reports:
The performance test report includes
- the number of successful requests
- the TPS (transactions per second) achieved
- the number of failed requests
These metrics can then be used to adjust the memory/CPU cores for a container on a given pod in order to meet the SLA needs.
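As a quick illustration of how these report numbers relate (the figures below are made up), TPS is simply the count of successful requests divided by the test duration, and the failure count gives the error rate:

```java
public class TpsReport {
    // TPS = successful transactions / test duration in seconds
    static double tps(long successRequests, long durationSeconds) {
        return (double) successRequests / durationSeconds;
    }

    public static void main(String[] args) {
        long success = 90_000;  // hypothetical successful requests
        long failures = 120;    // hypothetical failed requests
        long duration = 300;    // a 5-minute test run

        System.out.printf("TPS: %.1f, error rate: %.2f%%%n",
                tps(success, duration),
                100.0 * failures / (success + failures));
    }
}
```

If the achieved TPS falls short of the SLA target, that is the signal to revisit the pod's memory/CPU allocation.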
Functional test reports:
Functional test reports, found in the deployment logs, help the developer make sure the end-to-end functionality is not disturbed by each new feature added to the application. In a typical microservice architecture, this involves functional tests in which multiple services communicate with each other, verifying that the communication is still intact and aligned with the business requirements.
Automating performance/functional tests in the CI/CD pipeline makes the application ready for worst-case scenarios in an automated fashion. It is always wise to anticipate problems that may arise in the future, such as an unexpected spike in active users, and to check the application's functionality with every change, automatically of course.
In the end, a good developer is one who gets a good night's sleep and a proper work-life balance, which can only be achieved by automating as much as possible and intervening manually only when the computing gods are angry.