Real world integration testing with Serverless

Chris Andrews
Oct 22 · 4 min read

With the new era of FaaS dawning, applications can now concentrate on business logic. By using the Serverless framework, adding and wiring together multiple cloud components is just a case of adding a few lines of config.

This has had an impact on the testing landscape too. On one hand, unit tests have become smaller, purer, and easier to write. On the other, integration testing has become much harder, and even more important.

Most existing tutorials only cover integration testing for basic scenarios. Take the usual example:

Simple architecture

In this instance serverless-offline is ideal. It spins up a mock API Gateway and Lambda running on your local machine. Perfect!
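For that simple case, the setup is minimal. A sketch of the relevant config, assuming the plugin's defaults:

```yaml
# serverless.yml
plugins:
  - serverless-offline
```

Running sls offline start then serves your API Gateway endpoints and Lambdas locally.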

But what if you want something a little more involved?

More involved architecture

There are other plugins that work alongside serverless-offline to cover these additional components, such as serverless-offline-sqs, serverless-dynamodb-local, etc. Some are less robust or missing more features than others.

However, these plugins require the underlying technology to be running, plus some config and setup. SQS, for example, requires the serverless-offline-sqs plugin and an ElasticMQ instance, set up using, say, Docker.

# serverless.yml
custom:
  serverless-offline-sqs:
    endpoint: http://0.0.0.0:9324
    region: eu-west-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
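The ElasticMQ instance itself might be started with Docker, e.g. using the softwaremill/elasticmq image (image name and port are the project's published defaults, worth double-checking against the ElasticMQ docs):

```shell
# Start a local ElasticMQ (SQS-compatible) server on its default port
docker run -d --name elasticmq -p 9324:9324 softwaremill/elasticmq
```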

Not only that, but the above config only affects your Serverless deployment. If you were using the AWS SDK inside your code, you'd have to change the endpoints there too:

new AWS.SQS({ endpoint: 'http://0.0.0.0:9324' });
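To avoid scattering hard-coded endpoints, that switch can be isolated in one place. A sketch, using a hypothetical sqsClientOptions helper (serverless-offline does set an IS_OFFLINE environment variable when running locally):

```javascript
// Hypothetical helper: choose SQS client options based on environment.
// serverless-offline sets IS_OFFLINE when running locally.
function sqsClientOptions(env) {
  return env.IS_OFFLINE
    ? { endpoint: 'http://0.0.0.0:9324', region: 'eu-west-1' } // local ElasticMQ
    : {}; // in the cloud, let the SDK resolve the real endpoint
}

// Usage sketch:
// const sqs = new AWS.SQS(sqsClientOptions(process.env));
```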

With so much config, setup, and custom wiring, there are just too many spinning plates and too much potential for failure.

Next, Localstack coupled with serverless-localstack looked like it might be the silver bullet. Localstack is a fairly comprehensive suite of mocks for most AWS products.

# serverless.yml
plugins:
  - serverless-localstack

custom:
  localstack:
    autostart: true

Using serverless-localstack means you can run the standard deployment command, sls deploy, and deploy everything to Localstack.

It’s easily controlled using Docker, and it even offers a realistic Lambda environment by using the Lambda Docker image.

Unfortunately, the CloudFormation functionality provided by Localstack was missing a lot of features. Given how heavily Serverless relies on CloudFormation, this ruled Localstack out.

So with the two main contenders for mocking, serverless-offline and Localstack, both falling short of a viable solution, it was back to the drawing board.


Essentially, what I’m looking for in integration tests is “confidence”: a high degree of certainty that all of my modules work together in the cloud environment.

Mocks are assumptions about the way external services behave, which sometimes results in false positives.

So why use mocks at all? Instead of trying to mock all of the products in AWS, why not just use the real thing? Why not deploy your application to AWS and test against it there?

With the ease of deployment and the speed of deploying individual functions in Serverless, the feedback loop is quick enough that using the real AWS is actually feasible. And with no mocks, you greatly increase your degree of confidence in what you’ve written.

It actually requires very little config, as we can make use of the stage option to deploy to a separate stage. An example might look like:

# serverless.yml
plugins:
  - serverless-stack-output
provider:
  stage: ${opt:stage}
functions:
  status:
    name: status-${self:provider.stage}
    events:
      - http:
          path: /status
          method: GET

Using a plugin like serverless-stack-output allows us to get the URLs and ARNs of the deployed resources and functions. It outputs a JSON file:

{
  "StatusLambdaFunctionQualifiedArn": "arn:aws:lambda:xxx:function:status-stagename",
  "ServiceEndpoint": "https://xxxxxx.execute-api.xxx.amazonaws.com/stagename"
}
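To produce that file, the plugin needs to be told where to write it. Assuming serverless-stack-output's output.file option, the config might look like:

```yaml
# serverless.yml
custom:
  output:
    file: ./stack.json
```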

This makes our integration tests simple and clear:

// integration.test.js
import axios from 'axios';
import { ServiceEndpoint as API_GATEWAY_BASE_URL } from './stack.json';

test('returns the status', async () => {
  const { status } = await axios({
    url: `${API_GATEWAY_BASE_URL}/status`,
  });
  expect(status).toEqual(200);
});

When deploying with sls deploy there will be a short wait while all of the resources and Lambdas are deployed. However, if you’ve only changed one of the Lambdas, you can use sls deploy function -f functionName, which takes only a couple of seconds.

You can also use the logs command on the Serverless CLI (sls logs -f functionName) to get the function logs, saving you from trawling through CloudWatch.

The CI process follows the same pattern: spin up your application under a new stage, run the tests, then tear down the application and all of its resources.
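As a sketch, a CI script might look something like this (the stage naming and the jest invocation are illustrative, not prescriptive):

```shell
# Deploy an ephemeral stage, run the integration tests, then tear it down
STAGE="ci-${BUILD_ID}"
sls deploy --stage "$STAGE"
npx jest integration.test.js
sls remove --stage "$STAGE"
```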

Whilst the above is a very simple example for the sake of brevity, it can easily be expanded to more complex flows using the same idea.
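For instance, an asynchronous flow (HTTP, then SQS, then Lambda, then DynamoDB) can be tested with the same black-box approach, plus a small polling helper to cope with eventual consistency. A sketch, where waitFor is a hypothetical helper rather than part of any library:

```javascript
// Hypothetical polling helper for eventually-consistent assertions:
// retries `check` until it returns a truthy value, or gives up.
async function waitFor(check, { retries = 10, delayMs = 500 } = {}) {
  for (let i = 0; i < retries; i++) {
    const result = await check();
    if (result) return result;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Condition not met in time');
}

// Usage sketch: trigger the flow, then wait for the record it should create.
// const item = await waitFor(() =>
//   dynamo.get({ TableName: TABLE_NAME, Key: { id } }).promise()
//     .then((res) => res.Item)
// );
```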

That being said, this approach isn’t without its disadvantages. It slows the feedback loop down, you’re charged for each run, and debugging is harder.


This strategy yielded the fewest tradeoffs and delivered a much higher degree of confidence, which is essential in the new world of FaaS, where the value of integration tests has never been higher.

This is just one approach, but hopefully, as the Serverless ecosystem matures, so will its support for integration testing.

