Performance Test with Postman Canary

Alex Rodriguez
May 15, 2023
Analogy: a gas station has 2 pumps, and each pump fills a full tank in 1 minute, so its throughput is 2 cars per minute; everyone else waits in line.

NOTE: Postman Canary is a preview version of the popular Postman API tester. It is intended for users who want to test the latest features and updates before they are officially released to the public. Canary releases are considered experimental and may contain bugs or other issues.

Postman Canary offers a performance testing capability that can simulate thousands of virtual users, helping you find bottlenecks and optimize APIs to handle high traffic. It is ideal for developers and QA engineers who want to build faster and more reliable APIs.

Now that we have introduced this feature, let us look into its main settings.

1- In Postman Canary’s performance testing feature, virtual users are simulated clients used to test how an API performs under different load conditions.

2- The test duration is the amount of time that the performance test will run for.

3- The Load Profile defines how the virtual users will interact with the API over the duration of the test.

The “Fixed” option in the load profile means that the number of virtual users remains constant throughout the entire test duration.

The Load Profile - Ramp up option in Postman Canary performance testing allows users to gradually increase the number of virtual users over time. This means that the load on the API will slowly increase until it reaches the desired maximum number of virtual users. This option helps simulate a more realistic scenario where traffic increases gradually instead of a sudden traffic spike.
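The difference between the two profiles can be sketched as a simple function of elapsed time. This is an illustrative model of the behavior, not Postman’s actual implementation:

```javascript
// Illustrative model of the two load profiles (not Postman's internal code).
// Returns how many virtual users are active at a given second of the test.
function activeUsers(profile, elapsedSec) {
  const { maxUsers, rampUpSec } = profile;
  if (profile.type === "fixed" || elapsedSec >= rampUpSec) {
    return maxUsers; // constant load for the rest of the test
  }
  // Ramp up: users are added linearly until rampUpSec is reached.
  return Math.ceil((elapsedSec / rampUpSec) * maxUsers);
}

const rampUp = { type: "ramp-up", maxUsers: 10, rampUpSec: 60 };
console.log(activeUsers(rampUp, 30)); // halfway through the ramp -> 5
console.log(activeUsers(rampUp, 90)); // after the ramp -> 10
```

With the “Fixed” profile the function returns the maximum immediately, which is exactly the sudden-spike scenario the ramp-up option is meant to avoid.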

EXAMPLE

Let us now create a benchmark using a test API running in a local environment (PostgreSQL, Express, React, NodeJS).

We will send a POST request to the endpoint /todos with a JSON payload that contains a 10-character random value:

{
"name_description":"{{randomValue}}"
}


curl --location 'http://localhost:4500/todos' \
--header 'Content-Type: application/json' \
--data '{
"name_description":"{{randomValue}}"
}'
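The `{{randomValue}}` variable must be populated before each request. One way to do that is with a small generator function in a Postman pre-request script; the function below is plain JavaScript so it can run anywhere, and the commented line shows how it would be wired into Postman (the variable name `randomValue` matches the payload above):

```javascript
// Generates a 10-character alphanumeric value for the request body.
function randomValue(length = 10) {
  const chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  let out = "";
  for (let i = 0; i < length; i++) {
    out += chars[Math.floor(Math.random() * chars.length)];
  }
  return out;
}

// In a Postman pre-request script you would register it as a variable:
// pm.variables.set("randomValue", randomValue());
console.log(randomValue()); // prints a 10-character random string
```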

Load Profile Fixed

Scenario:
Virtual Users: 2
Test Duration: 1 minute

Let us evaluate the results:

Total Requests sent: 108
Notes: 2 virtual users executed over a period of 1 minute.

Requests/s: 1.72
Notes: Remember that factors such as memory, network conditions, and hardware may affect the requests-per-second figure.

Average Response Time: 8 ms
Notes: The average response time is calculated by adding up the response times of each request made during the test and then dividing by the total number of requests.

Error Rate: 0%
Notes: If a request returns a status code other than 200, it is counted as an error.
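Assuming each request’s latency and status code were recorded, the summary numbers above can be reproduced with a few lines. The sample data below is made up for illustration:

```javascript
// Recomputes the summary metrics from per-request samples.
// The sample data here is illustrative, not the actual test output.
const samples = [
  { ms: 3, status: 200 },
  { ms: 8, status: 200 },
  { ms: 12, status: 200 },
  { ms: 9, status: 500 }, // a failed request, for illustration
];

const durationSec = 2; // wall-clock length of the run
const requestsPerSec = samples.length / durationSec;
const avgMs = samples.reduce((sum, s) => sum + s.ms, 0) / samples.length;
const errorRate =
  samples.filter((s) => s.status !== 200).length / samples.length;

console.log(requestsPerSec);        // 2
console.log(avgMs);                 // 8
console.log(errorRate * 100 + "%"); // 25%
```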

Getting a total count from PostgreSQL:

SELECT count(*) FROM todo;

 count
-------
   108
(1 row)

Evaluating performance details for the total duration:

Min(ms): 3
Notes: Of all 108 requests, the fastest response took 3 ms.

Max(ms): 24
Notes: Of all 108 requests, the slowest response took 24 ms.

90th(ms): 12
Notes: The 90th percentile is the value below which 90% of the samples fall. This means that 10% of requests may have a longer response time than the reported 90th percentile.
For example, if a test reports a 90th percentile response time of 12 ms, this means that 90% of requests had a response time of 12 ms or less. The remaining 10 percent of requests may have a response time of more than 12 ms. In a real project, the 90th percentile is used to measure response time for most users. This helps developers and testers identify and optimize slow APIs and ensure that most requests have an acceptable response time.
For example, an e-commerce site can use the 90th percentile to ensure that most users can complete a transaction in a reasonable amount of time, thereby reducing the likelihood of cart abandonment and dissatisfied customers.
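One common way to compute the 90th percentile is the nearest-rank method (one of several conventions); a minimal sketch:

```javascript
// p-th percentile via the nearest-rank method: sort the samples,
// then take the value at rank ceil(p/100 * n).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [3, 4, 5, 6, 7, 8, 9, 10, 12, 24]; // illustrative sample
console.log(percentile(latencies, 90)); // 12 -> 90% of requests took 12 ms or less
```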

Graph Metrics Filters
- Avg. response
- Min response
- Max response
- 90th percentile

Graph detail filters
- Virtual users — 10 requests
- Requests/s
- Avg. response
- Error rate

Throughput

Throughput refers to the number of requests processed by the server in a given time period. This is usually measured in requests per second (RPS) and is a key performance testing metric.

Throughput can help determine an API’s ability to handle requests under different load conditions and can be used to identify bottlenecks and areas for optimization.

High throughput indicates that the API can handle a large number of requests, while low throughput indicates that the API may be experiencing performance issues or may not be able to handle high volumes of traffic.
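The throughput-over-time curve shown in the graph view can be derived by bucketing request completion times into one-second windows. A minimal sketch with made-up timestamps:

```javascript
// Buckets request completion timestamps (in seconds since test start)
// into one-second windows to get throughput over time.
function throughputPerSecond(timestamps) {
  const buckets = {};
  for (const t of timestamps) {
    const sec = Math.floor(t);
    buckets[sec] = (buckets[sec] || 0) + 1;
  }
  return buckets;
}

const timestamps = [0.2, 0.7, 0.9, 1.1, 1.5, 2.3]; // illustrative data
console.log(throughputPerSecond(timestamps)); // { '0': 3, '1': 2, '2': 1 }
```

A dip in one of these buckets while the virtual-user count stays flat is a typical sign of a bottleneck worth investigating.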

Ramp Up Summary

Virtual Users: 10
Test Duration: 2 mins
Load Profile -> Ramp up: 1 min

Summary:

Total Requests sent: 804 over a period of 2 minutes with 10 virtual users
Requests/s: 6.49

The number of requests made during the test is determined by the load profile defined in the test configuration and the duration of the test. The load profile determines how many virtual users are simulated and how quickly they reach the specified load.

The test duration determines how long the test lasts. Based on these factors, Postman calculates and sends the appropriate number of requests to the API under test.
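We can sanity-check this relationship with a back-of-envelope calculation using the numbers from the two runs (assuming a roughly constant per-user request rate, which is a simplification):

```javascript
// Back-of-envelope estimate: total requests ≈ user-seconds × requests per
// user-second. During a linear ramp, the average active user count is half
// the maximum.
function userSeconds(maxUsers, rampSec, totalSec) {
  return (maxUsers / 2) * rampSec + maxUsers * (totalSec - rampSec);
}

// Fixed run: 2 users, no ramp, 60 s -> 120 user-seconds for 108 requests.
const perUserRate = 108 / userSeconds(2, 0, 60); // 0.9 req per user-second

// Ramp-up run: 10 users, 60 s ramp, 120 s total -> 900 user-seconds.
const estimated = perUserRate * userSeconds(10, 60, 120);
console.log(Math.round(estimated)); // 810, close to the 804 actually observed
```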

Throughput can vary from one performance tester to another because it is affected by various factors such as the load profile, the number of virtual users, API response time, and payload size.

Therefore, performance may vary depending on these factors and it is important to consider them when analyzing performance test results.

Errors and History for performance test

The server crashed during the test and therefore generated errors, with a total error rate of 4.79%.

An error rate of 10% first appeared across 10 requests, followed by a 100% error rate across 13 requests. The performance test continues running, but these errors should be tracked to identify the root cause of the failure.

History for performance test execution under Performance runs

This screen helps you track and analyze results for different fields such as:

Total requests
Requests/s
Resp. Time(Avg ms)
Error %

Closing statement

In summary, for someone experienced in performance and API testing, Postman Canary is not hard to understand or evaluate. For someone with no experience, hopefully this article helps clarify some of the terminology used in performance testing.

Overall, this article is mostly an introduction to performance testing with Postman Canary. I am sure many improvements to the feature are coming.

If you enjoyed this content, I would appreciate it if you took the time to check out my other work and follow me. Thank you for your support, and I hope to continue providing valuable insights and information to help you with your projects.

#performance #loadtesting #api #virtualusers #throughput #latency #response #scalability #stresstesting #APItesting #performancetesting #loadsimulation #loadbalancing #serverperformance #responsemetrics #latencytesting #throughputtesting #performanceoptimization
