Overview of Software Performance Testing Activities

Okta "Oktushka" N.
Software Testing/QA
4 min read · Jun 27, 2020

Performance testing for software is an activity, or a series of tests, to discover how a software system behaves under its expected load and beyond that threshold. The system’s endurance is measured, usually by recording peak CPU and memory utilization over time. The unit of load can be the number of requests per second, requests per minute, or whatever fits your informational need.

Each type of performance testing is described below.

  1. Expected Load Burdening: Sending requests to a server at the rate of the expected input load. This test is also referred to as a load test.
  2. Above Expected Load Burdening: Sending requests to a server at a rate higher than the expected input load. This test is also known as a stress test.

Both types of test should be sustained for an acceptable period, anywhere from a short burst of requests over 2 minutes up to a constant moderate load for 8 hours; 2 minutes is assumed to be the duration during which users would constantly and intensively use the software or web app, while 8 hours is the common length of a working day.
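To make the two types concrete, below is a minimal load-generation sketch in Python (my own illustration, not from the original article; the target URL, rate, and duration are hypothetical placeholders). Keeping RATE_PER_SECOND at the expected load makes it a rough load test; raising it well above that turns the same script into a rough stress test.

import time
import threading
import requests  # third-party HTTP client, assumed to be installed

TARGET_URL = "http://example.test/api/health"  # hypothetical endpoint
RATE_PER_SECOND = 10    # expected input load; raise it well above this for a stress test
DURATION_SECONDS = 120  # e.g. a 2-minute burst

def fire_request():
    try:
        response = requests.get(TARGET_URL, timeout=10)
        print(response.status_code, response.elapsed.total_seconds())
    except requests.RequestException as error:
        print("request failed:", error)

# Fire RATE_PER_SECOND requests each second until the duration elapses
end_time = time.time() + DURATION_SECONDS
while time.time() < end_time:
    for _ in range(RATE_PER_SECOND):
        threading.Thread(target=fire_request).start()
    time.sleep(1)

In practice a dedicated tool such as JMeter handles ramp-up, think time, and reporting for you; the sketch above only illustrates the idea of a sustained request rate.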

Objectives of Performance Testing

Performance testing aims to achieve the following objectives:

  1. Identify the input capacity of the software system’s services, e.g. front-end, fabric, back-end.
  2. Identify vulnerable services; it can be argued that performance testing is also a part of security testing.
  3. Identify insufficient hardware resources.
  4. Identify architectural bottlenecks, e.g. bare-metal vs. containerized deployment.

Performance Testing Input Data

Generally, web app performance testing takes the form of sending HTTP requests. These requests usually use the most common HTTP methods, namely GET, POST, and PATCH, while other methods such as PUT and DELETE are rarely involved.

The common web activities included in a performance testing script are Login Request, Database Query, Web Page Browsing, Database Operation (usually insertion and update; deletion is rarely involved because of the data-loss risk it carries), and Logout Request.

The request payload is normally HTML form data, XML, or JSON.
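For illustration only (not from the original article; the endpoint and field names are hypothetical), the same login payload could be sent as JSON or as XML:

import requests  # third-party HTTP client, assumed to be installed

LOGIN_URL = "http://example.test/api/login"  # hypothetical endpoint

# JSON payload: requests serializes the dict and sets the Content-Type header automatically
json_response = requests.post(LOGIN_URL,
                              json={"username": "test.user", "password": "secret"},
                              timeout=10)

# XML payload: the body is sent as-is, so the Content-Type header is set explicitly
xml_body = "<login><username>test.user</username><password>secret</password></login>"
xml_response = requests.post(LOGIN_URL, data=xml_body,
                             headers={"Content-Type": "application/xml"},
                             timeout=10)

print(json_response.status_code, xml_response.status_code)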

See the figure below for a summary of the performance testing method.

Figure 1. Performance Testing Summary

Performance Testing Script Building Procedure

To automate performance testing, a script can be developed and even integrated into a CI/CD tool such as Jenkins or Travis CI. The high-level procedure, in sequence, is below.

  1. Determine the recording proxy: JMeter or BlazeMeter (the recording features of either can be buggy); this may be worked around with another proxy app, e.g. Burp Suite or OWASP ZAP. Refer to https://medium.com/@okta.n/preparation-of-performance-load-test-with-keycloak-in-place-cf8298e696c1?sk=4e0479d4b6a7b10612201056ef14d3a3
  2. Filter the recorded requests: Filter out insignificant or unwanted requests, e.g. image files (jpeg, png), css, html, js, font files (ttf), etc.
  3. Solve correlation: Most request-response scenarios involve correlation, i.e. passing parameter values from one request’s response to the subsequent request, for example Customer ID, Invoice ID, cookie parameters (Session ID, Token ID), etc. (a minimal correlation sketch follows the command example in step 4 below).
  4. Integrate into a CI/CD tool: After the performance testing script has been developed, for instance as a JMeter .jmx script, the command-line syntax to run it can be integrated into a CI/CD tool, e.g. Jenkins or Travis CI, so that the performance test can be started with a single click from the CI/CD interface. The JMeter command-line syntax itself would look something like this:

REM The following applies to Windows CMD only; it pads the hour with a leading zero
set hour=%time: =0%
jmeter -n -t script-name.jmx -l jtl\report-name_${__time(ddMMyyyy_HHmmss)}.jtl -j log\report-log_%date:~-4,4%%date:~-7,2%%date:~-10,2%_%hour:~0,2%%time:~3,2%%time:~6,2%.log
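
As referenced in step 3, the snippet below is a minimal correlation sketch in Python (my own illustration, not from the original article; the base URL, endpoints, and response field are hypothetical). The token returned by the login response is extracted and reused in the subsequent request.

import requests  # third-party HTTP client, assumed to be installed

BASE_URL = "http://example.test/api"  # hypothetical base URL

# Step 1: log in and capture the correlated value from the response body
login_response = requests.post(BASE_URL + "/login",
                               json={"username": "test.user", "password": "secret"},
                               timeout=10)
token = login_response.json()["token"]  # hypothetical response field

# Step 2: reuse the captured token in the subsequent request
invoice_response = requests.get(BASE_URL + "/invoices",
                                headers={"Authorization": "Bearer " + token},
                                timeout=10)
print(invoice_response.status_code)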

5. Monitor hardware utilization: While the performance test is running, hardware utilization can be monitored on Linux by executing the ‘top’ command; on Windows, it can be monitored with the following PowerShell commands (a cross-platform scripted alternative is sketched after the PowerShell examples below):

  • CPU utilization monitoring commands on Windows PowerShell:

Get-Counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 2 -MaxSamples 10 #This displays CPU utilization every 2 seconds for a maximum of 10 samples

or

Get-Counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 2 -Continuous #This displays CPU utilization every 2 seconds continuously

  • Memory utilization monitoring commands on Windows PowerShell:

Get-Counter -Counter "\Memory\Available MBytes" -SampleInterval 2 -MaxSamples 10 #This displays the remaining available memory every 2 seconds for a maximum of 10 samples

or

Get-Counter -Counter "\Memory\Available MBytes" -SampleInterval 2 -Continuous #This displays the remaining available memory every 2 seconds continuously
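
As mentioned in step 5, a cross-platform scripted alternative (my own sketch, not from the original article) is to poll CPU and memory programmatically, for example with Python’s psutil package:

import psutil  # third-party cross-platform system-monitoring package, assumed to be installed

# Sample CPU and available memory every 2 seconds, 10 times, mirroring the PowerShell examples
for _ in range(10):
    cpu_percent = psutil.cpu_percent(interval=2)  # average CPU utilization over the 2-second interval
    available_mb = psutil.virtual_memory().available / (1024 * 1024)
    print("CPU: %.1f%%  available memory: %.0f MB" % (cpu_percent, available_mb))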

Besides the command lines above, a more comprehensive and easily recorded way to monitor hardware utilization is to set up a dedicated monitoring dashboard such as Grafana. It provides a graph of each hardware utilization metric for every system service, e.g. front-end, REST API, etc.

Based on my personal experience, an acceptable safe threshold for CPU and memory utilization is below 80%. At 80% and above, you may start worrying about a hardware upgrade!

I’m not talking about hardware temperature in this article, because I assume that most software server environments, be it testing or production, are hosted in the cloud, e.g. AWS, so I do not think you could notify AWS and tell them, “Hello! My server is boiling! Will you switch on the aircon?!” I believe there is a Service Level Agreement (SLA) somewhere that covers server overheating, no?

Conclusion

Happy testing!

