Set Up a CI/CD Pipeline for AWS Lambda Using GitHub & Travis CI

Luke Mwila
Dec 31, 2019 · 9 min read

I received relatively good feedback for a post I wrote on storing and fetching data from DynamoDB using AWS Lambda. To date it continues to be one of my more popular publications. However, if you’ve read through it, you probably picked up that it was somewhat introductory: helpful for POC (Proof of Concept) work, but with a deployment strategy that falls short of what most real-world applications require.

You don’t want to be deploying to the cloud and the different stage environments using the CLI on your local machine when you can automate the process. Instead, we can use our project’s repository and a CI/CD tool like Travis CI to automate deploying our cloud functions to the different stage environments to help ensure that we only deploy quality software to production.

If you want to skip ahead and jump straight to viewing the source code, here’s the link to the repo.

What is CI/CD, how does it help and why be bothered?

Let’s start with the ‘why be bothered’ bit because it contains the purpose or goal that we’re trying to achieve, whereas CI/CD is simply a method to help us get there. The goal is to enable development teams to release a constant flow of quality software updates into production and to speed up release cycles by automating the various steps or stages in application development.

I’ll try to summarize and simplify CI/CD. It can be referred to as a method or a set of operating principles to accomplish the goals outlined above. When the term is used it often refers to three main concepts: continuous integration, continuous delivery and continuous deployment. The latter two are sometimes used interchangeably.

Continuous Integration — Continuous integration focuses on automated tests to ensure that the application is not broken when new commits are integrated into the main branch. Dev teams practicing this continuously merge their changes back to the main branch as often as possible.

Continuous Delivery — Continuous delivery picks up where Continuous integration ends or can be thought of as an extension of CI. The end goal of Continuous Delivery is to release the latest changes to customers quickly. Some teams will work with different environments and so CD ensures that there is an automated way to push these latest changes to these environments. The application can be deployed at any point by the click of a button.

Continuous Deployment — Continuous deployment is a practice in which every change that passes all stages of the production pipeline is released to customers. It is a completely automated process with no human intervention.

So, what’s the main difference between Continuous Delivery and Continuous Deployment? In Continuous Delivery, code changes are still deployed regularly, but each deployment to production is triggered manually. If the entire process of moving code from the source repository to production is fully automated, the practice is called Continuous Deployment.

Using Version-Control Branching

One method of carrying out Continuous Integration is to use version-control branching. My preferred branching strategy is Gitflow (which we’ll use in this post), but you can use other branching approaches to accomplish the same goal. A branching strategy basically helps define how new code is merged into standard branches for development, testing and production.

In the context of this post, each time we merge into or push to one of the three standard branches we’ll be using, it will trigger a new build and the source code from the branch will be used to deploy to the matching stage environment on AWS (i.e. dev branch will deploy to dev stage environment, uat branch will deploy to uat stage environment, etc.).
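The branch-to-stage convention is simple enough to sketch. Here is a hypothetical helper (illustration only, not part of the project) that captures which branch deploys to which stage environment:

```javascript
// Illustrative only: the branch-to-stage convention used throughout this post.
const branchToStage = {
  develop: 'dev', // develop branch -> dev stage environment
  uat: 'uat',     // uat branch -> uat stage environment
  master: 'prod', // master branch -> production stage environment
};

const stageFor = (branch) => branchToStage[branch];

console.log(stageFor('develop')); // dev
console.log(stageFor('master')); // prod
```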

Create Multiple AWS Accounts

We’re going to create multiple AWS IAM accounts that we’ll deploy our application with, because it’s good practice to keep the different environments (development, testing, and production) separate. This is another good way of securing access to the production environment. If you’re unsure how to create an IAM account, follow this guide. As you create these accounts, be sure to give each one a suffix that makes it easy to identify the environment the account is meant for (i.e. my-dev-account, my-uat-account, my-prod-account).

Once you’re done with that, store the AWS credentials (Access Key Id and Secret Access Key) of the created accounts in a secure place. These credentials will be needed when setting the environment variables for our deployments in Travis CI.

Create GitHub Repo & SLS Projects

First, head over to GitHub and create a new repository for the project. You can then create a project folder on your local machine where our lambda services will live. Using the CLI, change directory to the root folder of this project and initialize a git repo with the following command:

$ git init

You can then add the origin of the remote repository you just created with the following command:

$ git remote add origin https://github.com/your-username/your-repo-name.git

We’re going to create the following two services:

  • users-api
  • todo-api

Initialize the new serverless projects in the project’s root folder. You can do so with the following command:

$ sls create --template aws-nodejs --path name-of-service

Be sure to replace ‘name-of-service’ with the relevant name of the service 😉. Once you’ve done that, go ahead and create a YAML file called .travis.yml in the project root. You don’t have to worry about what this is for right now; we’ll come back to it later on.

Now that we’ve set up our services, we’ll leave the boilerplate code for now. The folder structure of your project should look like this:

├── users-api/
├── todo-api/
└── .travis.yml

Each service should have a basic handler with the following code:

'use strict';

module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };

  // Use this code if you don't use the http event with the LAMBDA-PROXY integration
  // return { message: 'Go Serverless v1.0! Your function executed successfully!', event };
};
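If you want to sanity-check the handler before wiring up the pipeline, you can invoke it locally with Node. A minimal sketch follows (the handler is inlined here so the snippet is self-contained; in the project it’s exported from handler.js):

```javascript
// Same shape as the generated handler, inlined for a quick local check.
const hello = async (event) => ({
  statusCode: 200,
  body: JSON.stringify(
    {
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    },
    null,
    2
  ),
});

// Simulate an invocation with a fake event object.
hello({ path: '/hello' }).then((res) => {
  console.log(res.statusCode); // 200
  console.log(JSON.parse(res.body).message);
});
```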

Once you’ve confirmed that, you can commit the changes and push to the remote repository on Github.

$ git push -u origin master

As mentioned in the Using Version-Control Branching section, we’re going to have a branch for each environment. So you can create two additional branches (uat and develop) and push those to the remote repo as well.

$ git checkout -b uat
$ git push -u origin uat
$ git checkout -b develop
$ git push -u origin develop
Github repo for project

Connect GitHub & Travis CI

We want automatic deployments when we push to the relevant branches in our monorepo. To do this, we need to connect our Travis CI and GitHub accounts. If you don’t have a Travis CI account, go ahead and create one and activate the GitHub App integration. Once you’ve done that, you should see all your repositories like so:

Travis CI listing Github repositories

When you find the Github project repo on Travis, you’ll notice that there are no existing builds so you should see a screen like this:

Travis CI Github repo

When you’re on the above screen, click on the More options menu in the top right corner and click on the settings option. Once you’re there, scroll down to the Environment Variables section.

Now, if you recall, we created separate AWS IAM accounts for the different environments earlier on. This is where the respective Access Key ID and Secret Access Key of the IAM users will be needed. We will add the relevant environment variables for each branch (i.e. the dev IAM account’s credentials will only be used for develop branch builds, the uat account’s for uat builds, etc.).

I’ll be using the following naming format for the names of the environment variables:

Dev Environment (develop branch)

AWS_ACCESS_KEY_ID_DEVELOPMENT

AWS_SECRET_ACCESS_KEY_DEVELOPMENT

UAT Environment (uat branch)

AWS_ACCESS_KEY_ID_UAT

AWS_SECRET_ACCESS_KEY_UAT

Production Environment (master branch)

AWS_ACCESS_KEY_ID_PRODUCTION

AWS_SECRET_ACCESS_KEY_PRODUCTION

Branch environment variables on Travis
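To make the naming convention concrete, here is a hypothetical sketch (illustration only, not project code) of how the suffixed variables map onto the standard AWS credential variables that the serverless CLI reads at deploy time:

```javascript
// Illustrative only: pick the credentials for a given environment suffix
// (DEVELOPMENT, UAT or PRODUCTION) out of the build's environment variables.
const pickCredentials = (suffix, vars) => ({
  AWS_ACCESS_KEY_ID: vars[`AWS_ACCESS_KEY_ID_${suffix}`],
  AWS_SECRET_ACCESS_KEY: vars[`AWS_SECRET_ACCESS_KEY_${suffix}`],
});

// Example with placeholder values (never hard-code real credentials).
const creds = pickCredentials('UAT', {
  AWS_ACCESS_KEY_ID_UAT: 'AKIA-EXAMPLE',
  AWS_SECRET_ACCESS_KEY_UAT: 'example-secret',
});
console.log(creds.AWS_ACCESS_KEY_ID); // AKIA-EXAMPLE
```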

Configure Travis YAML File

Earlier on we created a .travis.yml file with no content. We can finally turn our attention to this file. You can start by adding the following content to it:

language: node_js
node_js:
  - "10"

deploy_service_job: &DEPLOY_SERVICE_JOB
  cache:
    directories:
      - node_modules
      - ${SERVICE_PATH}/node_modules
  install:
    - npm install -g serverless
    - travis_retry npm install
    - cd ${SERVICE_PATH}
    - travis_retry npm install
    - cd -
  script:
    - cd ${SERVICE_PATH}
    - serverless deploy -s ${STAGE_NAME}
    - cd -

environments:
  - &PRODUCTION_ENV
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID_PRODUCTION}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY_PRODUCTION}
  - &UAT_ENV
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID_UAT}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY_UAT}
  - &DEVELOPMENT_ENV
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID_DEVELOPMENT}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY_DEVELOPMENT}

jobs:
  include:
    # develop branch deploys to the 'dev' stage
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Users API"
      if: type = push AND branch = develop
      env:
        - SERVICE_PATH="users-api"
        - STAGE_NAME=dev
        - *DEVELOPMENT_ENV
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Todo API"
      if: type = push AND branch = develop
      env:
        - SERVICE_PATH="todo-api"
        - STAGE_NAME=dev
        - *DEVELOPMENT_ENV
    # uat branch deploys to the 'uat' stage
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Users API"
      if: type = push AND branch = uat
      env:
        - SERVICE_PATH="users-api"
        - STAGE_NAME=uat
        - *UAT_ENV
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Todo API"
      if: type = push AND branch = uat
      env:
        - SERVICE_PATH="todo-api"
        - STAGE_NAME=uat
        - *UAT_ENV
    # master branch deploys to the 'prod' stage
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Users API"
      if: type = push AND branch = master
      env:
        - SERVICE_PATH="users-api"
        - STAGE_NAME=prod
        - *PRODUCTION_ENV
    - <<: *DEPLOY_SERVICE_JOB
      name: "Deploy Todo API"
      if: type = push AND branch = master
      env:
        - SERVICE_PATH="todo-api"
        - STAGE_NAME=prod
        - *PRODUCTION_ENV

Alright, first things first, what’s going on in this file 🤔?

We started by creating a job template called deploy_service_job that takes the path of a service (SERVICE_PATH) and a stage name (STAGE_NAME).

The deploy_service_job does an npm install in the repo’s root directory and in the service subdirectory (hence the cd command in the config file). We also specify that we want to cache the node_modules/ directory in both the root and the service directory for faster deployment.

The environment variable values are then set for the respective environments. You’ll notice the names of the environment variables match the ones we set on Travis in the Settings section.

Furthermore, we have specified that a Git push to a certain branch (develop, uat and master) will deploy to the respective stage environment using the AWS credentials obtained from the environment variables.

Configure Serverless YAML File

Next up, in both the users-api service and the todo-api service, open the serverless.yml file and update each one with the following file content:

service: users-api # Update this with the relevant service name

custom:
  stage: ${opt:stage, self:provider.stage}

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-west-1

functions:
  hello:
    handler: handler.hello

In this file, we specify variables based on the stage we are deploying to. We started by defining the custom.stage variable as ${opt:stage, self:provider.stage}. What exactly does this do? It tells the Serverless Framework to use the stage passed in as a CLI option if one exists. Remember that we set the stage with the -s flag in our .travis.yml file. If no CLI option is provided, the framework falls back to the stage specified by provider.stage. The stage variable is a special variable in the Serverless Framework that specifies which environment you are deploying to. You have probably noticed that I haven’t set provider.stage in our YAML file; by default, the stage is dev.
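The fallback behaviour of ${opt:stage, self:provider.stage} can be sketched in plain JavaScript (an illustration of the resolution order, not the framework’s actual implementation):

```javascript
// CLI option wins; otherwise fall back to provider.stage, which the
// Serverless Framework defaults to 'dev' when it isn't set.
const resolveStage = (cliStage, providerStage = 'dev') =>
  cliStage || providerStage;

console.log(resolveStage('uat')); // uat (serverless deploy -s uat)
console.log(resolveStage(undefined)); // dev (no -s flag, no provider.stage)
```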

The provider variables are relatively straightforward in terms of the cloud service provider, the runtime and selected region.

Lastly, we configure the lambda function (hello in this case) that will be used to generate the API endpoint.

Push, Build & Deploy

We’re just about done; all that’s left is to test our build pipeline to make sure everything is in order. To do that, commit the latest changes we made to the repo and push them to the remote develop branch, and that should trigger a build on Travis CI.

Provided the build was successful, go ahead and merge your develop branch into uat to test out that deployment. Lastly, once that is successful, go ahead and merge your uat branch into master.

Successful builds for each stage

If you sign in to your AWS Console and go to the Lambda service section, you should find that the lambda functions for each stage have been successfully deployed, just as the Travis CI job logs should show 🙌 😀.

AWS Console (Lambda Service)

As I mentioned at the start, the source code for this demo/tutorial can be found here. I hope this has helped fill any gap(s) you may have had in terms of understanding how to set up a CI/CD build pipeline for your AWS Lambda service functions. Happy Coding!

The Startup

Medium's largest active publication, followed by +588K people. Follow to join our community.

Luke Mwila, Software Engineer at Entelect | Maker | Speaker | Cloud Advocate
