Set up a CI/CD pipeline on AWS with Lambda and the Serverless Framework: Part 1

Lorenzo Micheli · Quantica · Nov 1, 2018

Having a CI/CD pipeline set up for your project has become increasingly crucial for both developers and Product Owners.

Whether you work on a small project or a big one, a CI/CD pipeline helps developers focus on what they like doing most (or what they are paid for…): coding new features, rather than wasting time in “merge hell” or bug fixing.

From the business point of view, a CI/CD pipeline helps get new features into production faster.

The same concept is equally valid for both traditional and serverless applications.

In this series of articles you are going to learn, step by step, how to set up a serverless CI/CD pipeline on AWS using Lambda and the Serverless Framework, which:

  • is triggered by pushing git commits
  • runs unit tests and code linting
  • deploys to a staging environment
  • deploys to production via a manual approval gate

The Project

So let’s take a small pet project written in Node.js: when you hit an HTTP endpoint with a timezone provided as a query parameter, it returns the current time for that timezone. You can find the code in our repo, structured as follows:

  • Production code in handler.js
  • Unit tests written in Jest in handler.test.js
  • ESLint configuration in .eslintrc.json
  • Serverless Framework configuration in serverless.yml
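
To give a rough idea of the last item, a minimal serverless.yml for a project like this might look as follows (the service name, region and HTTP path are assumptions for illustration; the real file lives in the repo):

```yaml
# Hypothetical minimal configuration; check the repo for the actual serverless.yml
service: gimmetime

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  gimmetime:
    handler: handler.gimmetime
    events:
      - http:
          path: time
          method: get
```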

Production Code

The handler of our function is super simple. It:

  1. Reads the tz parameter from the query string (if not specified, uses “Europe/Rome”).
  2. Validates the timezone against the list of known timezones.
  3. Prints a fancy message with the current time in that timezone.
const moment = require('moment-timezone');

module.exports.gimmetime = async (event) => {
  let tz = 'Europe/Rome';

  if (event.queryStringParameters && event.queryStringParameters.tz) {
    tz = event.queryStringParameters.tz;

    if (!moment.tz.names().includes(tz)) {
      return {
        statusCode: 400,
        body: `Unknown timezone ${tz}`
      };
    }
  }

  return {
    statusCode: 200,
    body: `The time in ${tz} is: ${moment.tz(tz).format()}`
  };
};

Test Suite

Our test suite is made up of three unit tests written with the Jest framework.

Disclaimer: this is the first time I’ve used Jest as a testing framework (and I already love it), so comments, suggestions and fixes are welcome.

The CI/CD Pipeline

Now that we have our project, let’s talk about the CI/CD Pipeline.

First let’s break down the term CI/CD:

CI stands for “Continuous Integration”, which is basically the practice of automatically building and testing your code each time a commit is pushed to the repository.

CD stands for either “Continuous Delivery” or “Continuous Deployment”, and the two don’t mean the same thing.

  • “Continuous Delivery” means that once your feature has been integrated and tested, it can be manually deployed to production.
  • “Continuous Deployment” basically means that the deployment to production happens automatically.

In this post we are going to focus on the CI part of the pipeline. There are a lot of CI tools out there: GitLab CI, Travis CI and Circle CI, just to name a few.

AWS provides a cloud service called CodePipeline, which, used in combination with other cloud services such as CodeBuild and CodeDeploy, allows you to build your own CI/CD pipeline.

For the sake of simplicity, and since you presumably already have an AWS account to run Lambda functions, we are going to use CodePipeline and CodeBuild to set up our pipeline.

CodePipeline

First things first, we create an S3 bucket to store our artifacts.

Just pick a bucket name, a region and click Next. Go ahead with Next using default options and permissions.

Now that our bucket is set, let’s configure our pipeline. In the list of services look for CodePipeline and click Create Pipeline.

Give it a name and select New service role. This way CodePipeline will create a new role with a wide range of permissions, so it can create CodeBuild stages, interact with CodeCommit, and so on.

This choice is fine for a development environment. For a production environment, we strongly recommend selecting Existing service role with an ad-hoc policy that grants a more restrictive set of permissions.

The source stage

Next, we need to tell CodePipeline where to find the source code by configuring a source stage. Currently, CodePipeline supports the following source providers: GitHub, CodeCommit and S3.

In this example we are going to Connect to GitHub, then select a repository and a branch. By selecting GitHub webhooks, every push of your commits will trigger a CodePipeline build. With the other option, CodePipeline will periodically check the repo for changes.

The build stage

Next we configure our build stage. We will use AWS CodeBuild as build provider. Click on Create project to setup our build environment.

Each time a build is triggered, AWS CodeBuild will spin up a Docker container from the image you choose in this step. You therefore need to configure the build environment to match the runtime your project targets. Since our pet project is written for Node.js 8.10, we will choose a Managed image:

  • Operating System: Ubuntu
  • Runtime: Node.js
  • Runtime version: aws/codebuild/nodejs:8.11
  • and we will Always use the latest image for this runtime version

For the purpose of this article, we let CodeBuild create a New service role to run our build. But once again, don’t do this in a production environment; instead, specify an existing role with a tailored policy, narrowing down the set of permissions.

One more tip: expanding the Additional configuration section reveals a Timeout parameter set by default to 1 hour. This means the build will be interrupted automatically if it lasts over an hour. Set it to a more reasonable value; in our case 5–10 minutes should be more than enough.

The Buildspec section of the build stage lets you specify which commands the stage will execute. You can choose either to type in the build commands or to run the commands defined in a buildspec file (named buildspec.yml by default) stored in the root folder of the repository.

We choose Use a buildspec file. Let’s have a quick look at the content of the buildspec.yml:

version: 0.2

phases:
  pre_build:
    commands:
      - npm install --no-progress --silent
  build:
    commands:
      - npm run-script lint
      - npm run test

The build process is divided into phases. In the pre_build phase we install the dependencies needed to run our application. In the build phase we first run some code linting with ESLint and then run the tests.
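
For the buildspec above to work, the lint and test scripts must be defined in package.json. A hypothetical fragment (the script commands and version ranges are assumptions; check the repo for the real file) could look like:

```json
{
  "scripts": {
    "lint": "eslint .",
    "test": "jest"
  },
  "devDependencies": {
    "eslint": "^5.6.0",
    "jest": "^23.6.0"
  },
  "dependencies": {
    "moment-timezone": "^0.5.21"
  }
}
```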

Now click on Continue to CodePipeline.

As we will see in Part 2 of this tutorial, we want to leverage the Serverless Framework to deploy our Lambda function, so we can safely Skip the deployment stage.

Now our pipeline is complete and it will run automatically. If everything has been configured correctly, you should see the Source and Build stages succeed.

Congratulations, you have just run your pipeline! From now on, every push of your commits will trigger a build.

What next?

In Part 2 of this tutorial we will focus on the CD (“Continuous Delivery”) part of the pipeline, deploying our artifacts first to staging and then to production with a manual approval gate. Stay tuned, stay serverless!

Are you looking to hire the best software developers, cloud experts, IT consultants… you name it… for your project? Get in touch with us at www.quantica.io; we look forward to hearing from you!
