Cost Analysis of Azure App Service API Performance using JMeter

Rajeev Kalal
Version 1
5 min read · Mar 20, 2023

Performance and load testing are an integral part of the software development life cycle. The race to launch high-quality applications as early as possible means that time to market plays an important role in determining an application’s success. The scalability, reliability, and security of the cloud reduce the overall management effort and cost of your infrastructure. Observing how an application reacts under excessive load reveals performance bottlenecks and the correct thresholds, which in turn helps in setting up the right SKUs. Testing application performance by simulating heavy loads ensures that downtime doesn’t derail a business’s cash flow at peak business times.

Prerequisites

  1. APIs hosted in Azure App Service.
  2. A JMeter/Taurus performance task implemented in the Azure DevOps pipeline.
  3. A Taurus YAML file configured in the Azure DevOps pipeline.
  4. The pipeline job generates the JMeter HTML reports as an artefact (a pipeline sketch follows this list).
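
For reference, a minimal sketch of such a pipeline is shown below, assuming the Taurus CLI (bzt) is installed on the agent and the test is described in a file named load_test.yaml; the file name, report folder, and step layout are illustrative assumptions rather than the exact pipeline used in this analysis.

    # Illustrative Azure DevOps pipeline sketch (assumed layout, not the exact pipeline used here)
    trigger: none

    pool:
      vmImage: 'ubuntu-latest'

    steps:
      # Install the Taurus (bzt) CLI on the build agent
      - script: pip install bzt
        displayName: 'Install Taurus'

      # Run the load test described in the Taurus YAML file (file name is an assumption)
      - script: bzt load_test.yaml
        displayName: 'Run JMeter test via Taurus'

      # Publish the generated HTML report folder as a pipeline artefact (path is an assumption)
      - task: PublishBuildArtifacts@1
        displayName: 'Publish JMeter HTML report'
        inputs:
          PathtoPublish: '$(System.DefaultWorkingDirectory)/reports'
          ArtifactName: 'jmeter-html-report'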

Architecture of the Azure Hosted APIs

  1. The API is created using the ASP.NET MVC Framework.
  2. The API is developed following best practices and addresses most of the NFRs (non-functional requirements).
  3. The API is integrated with Application Insights and stores its secrets in Key Vault.
  4. The API is hosted on App Service, connects to a backend Azure Cosmos DB, and is used for CRUD operations on the incident entity.

Criteria considered during the analysis

  1. The total number of samples considered for every SKU is 8900.
  2. Recommended price tiers were considered when doing this analysis.
  3. More than 1000 samples were not considered, as the execution time was very high.

Performance Metrics

Metrics captured in the cost analysis

  1. Concurrent users
  2. Ramp-up time
  3. Iterations
  4. Total hits
  5. Succeeded
  6. Failure
  7. Response time (ms)
  8. CPU Percentage (Average %)
  9. Consumed memory (Average %)
  10. SKU Type
  11. Cost

Metrics captured from Azure Portal

  1. CPU Percentage (Average %)
  2. Consumed memory (Average %)
  3. SKU Type
  4. Cost

Metrics captured from JMeter Taurus HTML reports

  1. Total hits
  2. Succeeded
  3. Failure
  4. Response time (ms)

Type and cost of SKUs

Thread Group

The thread group element controls the number of threads JMeter will use to execute your test. The controls for a thread group allow you to:

  1. Set the number of threads (Concurrent users)
  2. Set the ramp-up period (Ramp up time)
  3. Set the number of times to execute the test (Iterations)

Each thread will execute the test plan in its entirety and completely independently of other test threads.
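
In Taurus, these three thread group controls map directly to the concurrency, ramp-up, and iterations keys of an execution entry. The snippet below is a minimal sketch; the endpoint URL and the scenario name are hypothetical, not taken from the pipeline described here.

    # Minimal Taurus sketch: thread group settings expressed as an execution entry
    execution:
      - concurrency: 100   # number of threads (concurrent users)
        ramp-up: 60s       # ramp-up period
        iterations: 1      # number of times each thread executes the scenario
        scenario: incident-api

    scenarios:
      incident-api:
        requests:
          # Hypothetical endpoint; replace with the App Service URL under test
          - url: https://my-incident-api.azurewebsites.net/api/incidents
            method: GET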

How to Analyse and capture the performance metrics from Azure Portal

  1. Provide the configurations in the Taurus YAML file, such as concurrent users, ramp-up time, and iterations.
  2. Run the Performance test job in the pipeline with these configurations.
  3. Record the start time and the end time of the Performance test job execution.
  4. Go to the Azure portal.
  5. Click on the App services icon
  6. Select the App service which is being tested.
  7. Click on the App service plan in the left side pane.
  8. Under monitoring click on Metrics.
  9. Click on the Add metrics icon.
  10. Select the Memory Percentage option in the Metrics dropdown.
  11. Click on the Local time icon in the top right corner.
  12. Click on the Custom button.
  13. Select the date and enter the start time and end time recorded in step 3.
  14. Click on the Apply button.
  15. Record the average CPU percentage and average memory percentage (an Azure CLI alternative is sketched after this list).
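
As an optional alternative to the portal clicks above, the same averages could be queried from the pipeline with the Azure CLI. The step below is only a sketch: the service connection name, the App Service plan resource ID variable, and the time stamps are placeholders you would replace with the values recorded in step 3.

    # Optional sketch: query the App Service plan metrics instead of using the portal
    # (service connection, resource ID, and time stamps below are placeholders)
    - task: AzureCLI@2
      displayName: 'Capture average CPU and memory percentage'
      inputs:
        azureSubscription: 'my-service-connection'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          az monitor metrics list \
            --resource "$(appServicePlanResourceId)" \
            --metric "CpuPercentage" "MemoryPercentage" \
            --aggregation Average \
            --start-time 2023-03-20T10:00:00Z \
            --end-time 2023-03-20T10:30:00Z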

How to capture the performance metrics from JMeter HTML Reports

  1. Download the HTML report locally and unzip the file.
  2. Click on the .html file inside the unzipped folder to open it in a browser.
  3. Record the Samples metric as the total hits.
  4. Record metrics such as Failures and average response time (ms).

Combinations of thread groups used in the analysis

Seven different combinations of concurrent users, ramp-up time, and iterations are used in the analysis. Concurrent users range from 10 to 1000, ramp-up time ranges from 1 to 140, and iterations are set to 1 for all combinations. The same combinations are used for all the different SKU types; a sketch of how two such combinations could be expressed in Taurus follows.
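
For illustration, two of the seven combinations could be written as separate execution entries in the Taurus YAML. The values below are placeholders taken from the ends of the stated ranges (with ramp-up assumed to be in seconds), not the exact combinations used in the analysis.

    # Illustrative sketch of two combinations from the stated ranges
    # (placeholder values; ramp-up units assumed to be seconds)
    execution:
      - concurrency: 10
        ramp-up: 1s
        iterations: 1
        scenario: incident-api
      - concurrency: 1000
        ramp-up: 140s
        iterations: 1
        scenario: incident-api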

Metrics captured for each SKU

The performance metrics are recorded for each SKU type as shown in the table below.

Analysis of the metrics captured

  1. Out of 8900 hits, we have 190 failed requests.
  2. Out of 8900 hits, we have 8710 successful requests.
  3. The average response time is 786ms.
  4. The average CPU percentage is 0.71%.
  5. The average consumed memory is 13.73%.

Analysis of the metrics for all SKUs

Summary in a chart

Average response time per SKU

Failed requests per SKU

Conclusion

Based on the analysis that we have performed, and keeping the above caveats in mind, we have observed the following:

  1. The P2V3 SKU has the highest success rate.
  2. The P2V3 SKU has the lowest average response time.

Based on the above analysis, P2V3 is the recommended SKU for a good balance of API performance thresholds, scaling, and cost.

About the author:
Rajeev Kalal is a Test Automation Consultant here at Version 1.
