Serverless moves your unit of scale from servers to the number of requests you’re serving concurrently. This means that testing your APIs and functions must be handled differently: you need to test your request (or “TPS”) concurrency at peak rather than worry about total volume.
One of the most common things you need to do when working with Amazon API Gateway or AWS Lambda is to drive load at your serverless resources. One of the best and easiest ways to accomplish this is with Artillery, a free Node.js load-testing tool.
Simply start by installing Artillery via npm:
npm install -g artillery
Then you can use the following command to execute a load test:
artillery run <your-config-file>.yaml
In order to invoke a Lambda function, you’ll need to submit an HTTP POST to the Lambda service endpoint. The endpoint looks like this (it is region specific, so replace us-east-1 with your region):
https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/<function-name>/invocations
You’ll need to provide a valid AWS v4 signature as the Authorization header. It’s possible to generate that manually, but we’re going to use an Artillery plugin called artillery-plugin-aws-sigv4. Just npm install it like this:
npm install artillery-plugin-aws-sigv4
Now, you can use Artillery to generate load according to a pattern you specify in the config file. The plugin will perform AWS signature v4 signing for you.
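A config along these lines might look like the sketch below. The region, function name, and payload are placeholders, and the plugin options are an assumption based on the plugin’s documentation:

```yaml
config:
  target: "https://lambda.us-east-1.amazonaws.com"  # placeholder: your region
  phases:
    - duration: 60     # seconds
      arrivalRate: 10  # new virtual users per second at the start
      rampTo: 50       # ramp to 50 new users per second by the end
  plugins:
    aws-sigv4:
      serviceName: "lambda"  # assumption: option name per the plugin's docs
scenarios:
  - flow:
      - post:
          url: "/2015-03-31/functions/my-function/invocations"  # placeholder function
          json: {}
```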
This creates phases of load that simulate the traffic you want to test against your Lambda function. In this example, we’re creating one load phase that lasts 60 seconds. We start with 10 new users per second and ramp that up to 50 new users per second over the 60 seconds. You can add as many load phases as you want.
Once you run this test, it will start providing statistics on the results as the load phase ramps up. You can observe the Requests Per Second and the details on the response latency. As you can see below, we’re seeing median latency around 120–130ms.
Artillery will continue to display interim results throughout the test and finally provides a summary report of all launched scenarios and the overall response codes:
As you can see, this test resulted in a 100% success rate with median response times at 127ms.
If I compare this to the metrics recorded in CloudWatch, I see that everything matches. Note that I have Provisioned Concurrency enabled for this function to get the best performance.
Two metrics, ProvisionedConcurrencyUtilization and ProvisionedConcurrentExecutions, are useful in telling me how much of my Provisioned Concurrency allocation my test is using:
You can use this same approach to test HTTP APIs. Just modify your config file with the correct endpoint and make sure you have a machine that is powerful enough … or use a few EC2 instances :)
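For example, a config pointed at a hypothetical HTTP API might look like this (the endpoint and route are placeholders):

```yaml
config:
  target: "https://abc123.execute-api.us-east-1.amazonaws.com"  # placeholder endpoint
  phases:
    - duration: 60
      arrivalRate: 10
      rampTo: 50
scenarios:
  - flow:
      - post:
          url: "/my-route"  # placeholder route
          json:
            message: "hello"
```

If the API requires IAM authorization, keep the SigV4 plugin in the config and change its service name to execute-api, API Gateway’s signing name.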