P.S: This post was originally written as a single part. However, I’ve divided it into two to make it easier to read. In the first part, I’m going to explain why DevOps matters and why it is a good fit for serverless applications, and I’ll show two different ways to unit test Node.js serverless applications. In the second part, I’ll explain how you can create a DevOps pipeline based on AWS stacks.
DevOps is a set of practices that emphasizes better collaboration between development and operations teams. Its purpose is to deliver features, updates and fixes rapidly, frequently and more reliably (source).
Continuous Integration (CI) is a set of practices that drives the development team to implement small changes and check them in frequently. The technical goal is to automate building, testing and packaging applications. Continuous Delivery (CD) picks up where CI ends: it automates delivery of the application to selected environments (e.g. development, testing and production). A CI/CD pipeline is considered one of the best DevOps practices. Continuous integration and delivery require continuous testing, because the overall goal is to deliver a quality application (source).
AWS Lambda lets you run code without provisioning or managing servers. You just upload your code, and Lambda takes care of everything required to run and scale it. Using Lambda shortens development time and increases velocity: you can write your code, deploy it and make it functional within a few minutes. However, this increase in speed shouldn’t sacrifice quality. To keep quality standards, you should develop your functions the DevOps way, that is, within a CI/CD pipeline. Then, whenever changes are pushed to the central repository, your pipeline makes sure they meet the minimum quality standard. This is achieved by continuous, automated tests within your pipeline.
DevOps helps you minimize risks within your project. You can add different types of tests to your CI/CD pipeline to address different types of risk. Here are some of the test types you can add to your pipeline:
Static code analysis: examines your code without running it. This can help you find bugs and vulnerabilities in your code before they cause a disaster.
Unit test: validates that each unit of software performs as designed. A unit is the smallest testable part of any software (source). Unit tests are cheap to run, and it’s recommended to make them one of the first test phases in your CI/CD pipeline, so that if a code change breaks the application, the developer is notified ASAP. Keep in mind that other types of tests (e.g. integration tests) can be much more time-consuming and expensive to run. An added benefit of unit tests is that developers can run them locally.
Integration test: tests the connection and combination of your functions with their related services, as a whole group. Imagine that API Gateway triggers your function, and your function interacts with DynamoDB. There are emulators that help you test all of this locally. But in production, your function might still not work as it should: for example, your DynamoDB table might exceed its provisioned throughput, and your function would return an error to the user. To catch this kind of problem, you should perform integration tests.
Your options are not limited to these, and you can add other types of tests to your pipeline.
In this post, I’m going to show you how to perform automated unit tests for your serverless application within a CI/CD pipeline. I’m going to use the AWS Serverless Application Model (SAM), a powerful, open-source framework for building serverless applications. It provides some great features, including the ability to develop and test locally. I’m going to create the pipeline with AWS CodePipeline, which integrates seamlessly with SAM.
I have created a tiny Node.js serverless application (source code) with two functions: one for creating an item in DynamoDB (createItem) and one for reading an item from DynamoDB (readItem). The following shows the project structure:
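The exact layout lives in the linked repo; this is a hypothetical sketch of the kind of structure the post describes, with the business logic and helpers the later sections mention:

```
.
├── template.yaml            # SAM template: functions, resources, permissions
├── src/
│   ├── create-item.js       # Lambda handler for createItem
│   ├── read-item.js         # Lambda handler for readItem
│   ├── app.js               # business logic, separated from the handlers
│   └── lib.js               # shared helpers (responseFactory)
└── test/
    ├── create-item.test.js
    └── read-item.test.js
```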
I have separated the business logic (app.js) from the Lambda handlers. The handler interacts with the Lambda environment and is responsible for calling the business logic, injecting dependencies (e.g. the DynamoDB client), processing the event object and passing the right input into the logic. This separation is considered a good practice because it helps you write more testable code, and it can also help you reuse your business logic in other environments. Let’s take a look at the code:
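A minimal sketch of this separation (the real code is in the linked repo; function, table and field names here are illustrative). The logic takes the DynamoDB client as a parameter, and the handler is the only piece that knows about the Lambda environment:

```javascript
// app.js (sketch): pure business logic with the DynamoDB client injected.
async function readItem(docClient, tableName, id) {
  const result = await docClient
    .get({ TableName: tableName, Key: { id } })
    .promise();
  return result.Item || null;
}

// handler (sketch): builds the real client (in a real function you would
// require('aws-sdk') here), extracts input from the event, and delegates.
function makeHandler(docClient) {
  return async (event) => {
    const id = event.pathParameters.id;
    return readItem(docClient, process.env.TABLE_NAME || 'items', id);
  };
}

// Demo with an inline fake client, just to show the wiring.
const fakeClient = {
  get: () => ({
    promise: () => Promise.resolve({ Item: { id: '42', name: 'demo' } }),
  }),
};

makeHandler(fakeClient)({ pathParameters: { id: '42' } }).then((item) =>
  console.log(item) // logs the fake item
);
```

Because `readItem` never constructs its own client, a test can hand it any object with the right shape, which is exactly what the stubbing and mocking sections below rely on.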
Meanwhile, I have defined a helper function (responseFactory) in ./lib.js. This is helpful when a method is used by different components: Lambda proxy integration requires a specific output format from the function, so I generate all outputs with one helper function to avoid unexpected errors.
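A sketch of what such a helper can look like (the exact fields in the repo’s version may differ; the shape below is the one Lambda proxy integration expects: `statusCode`, optional `headers`, and a string `body`):

```javascript
// lib.js (sketch): one place that shapes every response the way
// Lambda proxy integration expects it.
function responseFactory(statusCode, body) {
  return {
    statusCode,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
}

// Usage: a handler returns this object directly to API Gateway.
console.log(responseFactory(200, { message: 'ok' }));
```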
Template.yaml is the configuration file used by SAM; it describes our Lambda functions and their associated resources and permissions. It’s important to follow the principle of least privilege: grant only the minimum required permissions to your functions. SAM offers Policy Templates, which are tighter, more secure versions of AWS Managed Policies. With that in mind, I’m using DynamoDBReadPolicy for my readItem function. This prevents the function from writing or doing other unwanted operations on our table, while still letting it log to CloudWatch.
The createItem function needs enough permissions to write an item into the table and to update existing items. Currently, there is no Policy Template with exactly these permissions, but SAM is flexible enough to let you attach custom policies to your functions. When you use them, you have to take care of permissions yourself: you need to add policies that let the function log to CloudWatch (unless you don’t want logging). Note that logging policies can be defined in a more granular way, scoped to a specific resource (i.e. a log group); for brevity, I have given a more generous resource for logging.
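A sketch of the relevant Template.yaml parts, under the assumptions above (resource names, handlers and the runtime version are illustrative, not from the repo): readItem gets the tight DynamoDBReadPolicy template, while createItem gets a custom statement plus explicit CloudWatch Logs permissions.

```yaml
Resources:
  ReadItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/read-item.handler
      Runtime: nodejs12.x
      Policies:
        - DynamoDBReadPolicy:        # SAM policy template: read-only access
            TableName: !Ref ItemsTable

  CreateItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/create-item.handler
      Runtime: nodejs12.x
      Policies:
        - Statement:
            - Effect: Allow          # only the write operations we need
              Action:
                - dynamodb:PutItem
                - dynamodb:UpdateItem
              Resource: !GetAtt ItemsTable.Arn
            - Effect: Allow          # logging; broad resource for brevity
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: '*'
```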
For testing, I have chosen the Ava.js library over the popular Mocha and Chai, because it runs each test file in parallel as a separate process, which gives better performance and isolation (source). Its test cases can also be written with a shorter syntax.
When it comes to unit testing serverless applications, there are at least two ways to do it: mocking or stubbing. I’m going to demonstrate both; it’s up to you which one you prefer.
Test stubs override methods to return specified, hard-coded values (source). Sinon.js is a popular, easy-to-use library that can help you do that. The following is a sample test case for the readItem function using stubbing. While running the test case, it stubs the DynamoDB client to return the expected result, injects it into our business logic, and then continues the test. Stubbing can be tricky if you don’t have access to your app’s dependencies (here, the DynamoDB client is the dependency); that’s why it’s important to separate the business logic from its dependencies.
Mocking is creating objects that simulate the behaviour of real objects (source). Generally speaking, test mocks are discouraged because they can bring extra complexity to your application: you always have to make sure that the mocked behaviour matches the actual behaviour (source). This is especially true for tests that deal with the real state of your application, e.g. integration tests. But using mocks for unit tests is a common approach and can be the right choice. However, it can be annoying to figure out how to configure them.
Aws-sdk-mock is a popular npm library that uses Sinon.js under the hood to mock the AWS SDK. Here you can see a sample test: it mocks the DynamoDB client to behave as expected, and then does the rest of the testing:
So far we have created unit tests. In the next post, I’ll show you how you can create a DevOps pipeline for your tests with AWS CodePipeline.