Over the past few weeks, I’ve had several interactions with people closely connected to the QA community. One particular query stuck out during these conversations and came up often.
How do you get started with Performance Testing?
OR
How do QAs doing manual & automation testing start Performance Testing?
After some thought, I was able to identify a few steps you can take to begin Performance Testing. And no, it does not involve learning a tool from day one.
1. Understanding the Network Tab in Developer Tools
The handiest tool in your browser is under Developer Tools > Network Tab.
Select the Fetch/XHR filter (to view HTTP requests/APIs) and sort the “Time” column to find the maximum response time taken by a request/API.
Observe the Finish and Load time parameters.
The Load timer counts how long it took the browser to fetch all of the assets that the HTML and CSS asked for; it stops when the page’s load event fires.
The Finish timer counts the time between the beginning of the page load and the completion of the last request the browser made; it keeps growing if the page continues to fire requests.
The Network tab is not limited to the examples I’ve provided; it can also be used innovatively to gather many other performance-related details, such as throughput or the size of any image or static resource.
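If you copy a few of these values out of the Network tab, simple arithmetic already gives you useful derived metrics. A rough sketch in Python, with made-up resource names and numbers standing in for what you would actually observe:

```python
# Hypothetical values copied from the Network tab: (resource, size in KB, time in ms)
requests_observed = [
    ("main.js", 420.0, 310),
    ("styles.css", 85.5, 120),
    ("hero.png", 1024.0, 650),
]

# The request with the highest "Time" column value
slowest = max(requests_observed, key=lambda r: r[2])

total_kb = sum(r[1] for r in requests_observed)
total_seconds = sum(r[2] for r in requests_observed) / 1000

# Rough throughput if the requests ran sequentially (a lower bound)
throughput_kbps = total_kb / total_seconds

print(f"Slowest request: {slowest[0]} at {slowest[2]} ms")
print(f"Total transferred: {total_kb:.1f} KB")
print(f"Approximate throughput: {throughput_kbps:.1f} KB/s")
```

This is the same analysis the Network tab lets you do by eye, just made repeatable.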
2. Performance-related Browser Plugins
I have worked with many browser-based plugins to identify performance-related issues. One plugin that gives you good information, along with improvement suggestions, is Lighthouse.
More detail about Lighthouse can be found here — https://developer.chrome.com/docs/lighthouse/overview/
Another browser tool that has been quite useful for observing the performance of an application is Performance Insights, which you can enable under Developer Tools.
It gives you a clear breakup of the DOM elements, the time taken to load them, and where an improvement can be made.
To gather more information about Performance Insight visit the following — https://developer.chrome.com/docs/devtools/performance-insights/
There are many similar plugins & extensions beyond Lighthouse & Performance Insights. Explore them & leverage them.
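Lighthouse can also be driven from scripts rather than the browser UI, which is handy once you want repeatable reports. A minimal sketch, assuming the Lighthouse CLI has been installed via npm (`npm install -g lighthouse`) and Chrome is available:

```python
import subprocess

def build_lighthouse_command(url, output_path="report.json"):
    """Build a Lighthouse CLI invocation that writes a JSON report.

    Assumes the `lighthouse` binary is on PATH (installed via npm).
    """
    return [
        "lighthouse",
        url,
        "--output=json",
        f"--output-path={output_path}",
        "--chrome-flags=--headless",
    ]

def run_lighthouse(url, output_path="report.json"):
    """Actually run the audit (requires Lighthouse and Chrome installed)."""
    subprocess.run(build_lighthouse_command(url, output_path), check=True)
```

The JSON report contains the same scores and audits you see in the DevTools panel, so you can track them over time or fail a pipeline when they regress.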
3. Understanding Server Resources & Their Utilizations
While the UI and its response time are important, the backbone of any well-performing application is the backend resources being used: Application servers, DB servers, Log servers, Downstream Applications, API servers, and any other infrastructure components that support your application.
So where do you start?
- With the application architecture diagram.
- List down all the infrastructure components, and gather their backend hosting details. Get access to monitor them.
- If they are on-premise servers, get access to either a UI or a backend or to the person managing it.
- Start observing CPU & Memory utilization of the different servers that are deployed
- Get yourself familiar with the Garbage Collection mechanism that is being used in your application
- Understand what vCPUs, cores, nodes, instance types, and RAM capacity mean
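As a first step, you can sample these numbers from a script instead of a dashboard. A minimal sketch using only the Python standard library; it is Unix-only (load averages are not available on Windows), and the "load above core count" rule is only a rough saturation signal:

```python
import os
import time

def sample_load(samples=3, interval=1.0):
    """Sample the 1-minute load average and flag rough CPU saturation.

    A load average above the core count suggests the CPU is oversubscribed.
    Unix-only: os.getloadavg() is not available on Windows.
    """
    cores = os.cpu_count() or 1
    readings = []
    for _ in range(samples):
        one_minute_load, _, _ = os.getloadavg()
        readings.append(one_minute_load)
        time.sleep(interval)
    saturated = any(load > cores for load in readings)
    return readings, saturated
```

For example, `sample_load(samples=5, interval=2.0)` gives you ten seconds of readings plus a yes/no saturation flag you can log or alert on.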
If your server CPU utilization looks anything like this, there is a problem. So, what can you do when you observe this?
- Raise an alarm and log this behavior
- Validate whether this is a one-off circumstance or a recurring pattern
- Look at the logs for that time window and identify which activity took place
- Repeat the same activity and check if you are able to reproduce it
- If it is reproducible, categorize the activity: was it a data-heavy functionality, what type of API was being triggered, was there any UI resource being loaded, etc.
- Take help from people who will be able to help you identify these factors
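The "one-off versus recurring" check above can be made mechanical once you have a series of CPU readings. A small illustrative sketch; the threshold and spike counts here are arbitrary and should be tuned to your servers:

```python
def classify_cpu_behavior(cpu_samples, threshold=85.0, recurring_after=3):
    """Classify a series of CPU utilization percentages.

    Returns "normal" if no sample crosses the threshold, "one-off" for
    isolated spikes, and "recurring" once spikes repeat often enough to
    warrant a deeper investigation.
    """
    spikes = sum(1 for sample in cpu_samples if sample >= threshold)
    if spikes == 0:
        return "normal"
    if spikes < recurring_after:
        return "one-off"
    return "recurring"
```

For example, `classify_cpu_behavior([40, 95, 50, 92, 60, 97])` returns "recurring", which is your cue to start correlating with logs.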
4. Reading Logs & leveraging Log Management Tools
One of the steps that I listed above is to look at Logs.
Lines and lines of logs: how do you scan through hundreds of thousands of lines to find issues? You can start with the following:
- Start searching for keywords like “Errors” or “Warnings”
- Encourage your team to capture errors (requests & responses) at least in the lower environments, and error logs in production environments
- Search for HTTP Error Codes 4xx & 5xx
- If there is an error in the logs, map it to the server’s behavior during that time: were there any notable abnormalities or spikes?
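The keyword and status-code searches above can be scripted as a first pass over a log file. A rough sketch; the pattern is deliberately simple and will need tuning for your log format:

```python
import re

# Matches ERROR/WARN/WARNING keywords or standalone HTTP 4xx/5xx codes.
# Crude on purpose: a 3-digit number starting with 4 or 5 is not always
# a status code, so treat hits as candidates, not confirmed errors.
SUSPECT_PATTERN = re.compile(
    r"\b(?:ERROR|WARN(?:ING)?)\b|\b[45]\d{2}\b", re.IGNORECASE
)

def scan_log_lines(lines):
    """Return (line_number, line) pairs worth a closer look."""
    return [
        (number, line.rstrip())
        for number, line in enumerate(lines, start=1)
        if SUSPECT_PATTERN.search(line)
    ]
```

Pointing this at a log file (`scan_log_lines(open("app.log"))`) narrows hundreds of thousands of lines down to the handful you actually need to read.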
There are also many Log Management tools, like Splunk, SolarWinds, Datadog, etc., that can be added to your infrastructure. These help you traverse logs easily through a UI with filters.
5. Add a response time capturing mechanism in your automation scripts
We already write many lines of code for automated regression tests, smoke tests, sanity tests, etc. Adding a few more lines lets us leverage these existing setups to gather more information related to the performance of the system.
If you are using Selenium with Python, you can try adding the following lines to gather page timing metrics:
from selenium import webdriver

site = "http://www.example.com/"
driver = webdriver.Chrome()
driver.get(site)

# Navigation Timing values, in milliseconds since the epoch
startOfNavigation = driver.execute_script("return window.performance.timing.navigationStart")
startOfResponse = driver.execute_script("return window.performance.timing.responseStart")
completedDOM = driver.execute_script("return window.performance.timing.domComplete")

# time before the first byte arrived vs. time spent building the page
backendPerformance = startOfResponse - startOfNavigation
frontendPerformance = completedDOM - startOfResponse

print("Back End: %s" % backendPerformance)
print("Front End: %s" % frontendPerformance)

driver.quit()
startOfNavigation — The time at which the user agent finished unloading the previous page or document.
startOfResponse — The time at which the user agent received the first byte of the response, from the server or from local sources/the application cache.
completedDOM — The time just before the current document’s readiness is set to “complete”.
While testing with API automation, one example is to check if the API Response Time is within the expected SLA.
import org.hamcrest.Matchers;
import org.testng.annotations.Test;
import io.restassured.RestAssured;
import io.restassured.response.Response;
import io.restassured.response.ValidatableResponse;
import io.restassured.specification.RequestSpecification;
public class ResponseTimeTest {
    @Test
    public void verifyResponseTime() {
        // base URI with the Rest Assured class
        RestAssured.baseURI = "https://www.example.com";
        // input details
        RequestSpecification request = RestAssured.given();
        // GET request
        Response response = request.get();
        // obtain the ValidatableResponse (chained directly off the Response)
        ValidatableResponse validateResponse = response.then();
        // verify the response time is less than 1000 milliseconds (SLA)
        validateResponse.time(Matchers.lessThan(1000L));
    }
}
The response time is obtained in milliseconds. To validate the response time with Matchers, we need to use the below overloaded methods of ValidatableResponseOptions:
- time(matcher) — it verifies the response time in milliseconds with the matcher passed as a parameter to the method.
- time(matcher, time unit) — it verifies the response time with the matcher, and the time unit is passed as a parameter to the method.
Conclusion
To summarize, you can get started with Performance Testing by
- Familiarizing yourself with the Network tab in Developer Tools
- Exploring performance-related browser plugins
- Understanding server resources & their utilization
- Reading logs & leveraging Log Management Tools
- Adding a response time capturing mechanism to your automation scripts
These steps should get you started with Performance Testing. Try them out and share whether they prove useful, for a possible part 2.