Serverless Web Applications with CI/CD in AWS

Earl Gay
7 min read · Aug 12, 2017

Serverless applications integrate managed services such as Cognito, S3, and DynamoDB with event-driven Functions-as-a-Service offerings such as AWS Lambda to deploy applications without managing servers.

Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale, and manage any servers. You can build them for virtually any type of application or backend service, and everything required to run and scale your application with high availability is handled for you.

There are a lot of awesome benefits to serverless, including dramatically reduced solution costs. But the real power of serverless is unleashed when developers worry less about their infrastructure and focus more attention on the services that differentiate their applications.

An attendee from A Cloud Guru’s recent Serverlessconf in Austin highlighted that developers are most excited about sharing the solutions they are creating with serverless architectures. That’s the power of serverless — allowing developers to focus on delivering business value.

The enthusiasm for serverless doesn’t mean it’s easy. It takes time to shift to new approaches and architecture patterns, and there are plenty of challenges with operations and observability.

There are plenty of deep-dive sessions from Serverlessconf on YouTube that explore the challenges with serverless, and they are well worth the watch. For the live show, the next Serverlessconf will be hosted in NYC from October 9–11.

Getting Up to Speed with AWS Labs

To help developers get up to speed on serverless, AWS has released some tutorials focused on building and managing serverless platforms. There are a couple of great examples written by AWS Labs:

The website created using the “Build a Serverless Web Application” Learning Path
The SAM the Squirrel Farm deployed within the CI/CD Pipeline

Extending the Examples

Both of the AWS Labs samples are awesome on their own, but they complement each other very well, so I thought it made sense to combine them. I had a couple of goals when combining the projects:

  1. Simplify the templates and pipeline so they could be easily deployed as starter kits for projects with minimal tweaks (e.g. remove a lot of the customization around “WildRydes” in the website and switch from Lambda configuring Cognito to using CloudFormation directly; a sketch of that change follows this list).
  2. Make the entire process as automated as possible as part of the pipeline (e.g. integrate updating the website itself into the pipeline instead of just the serverless backend).
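
For example, the Cognito change from goal #1 means the user pool and its app client are declared directly in the CloudFormation template instead of being set up by Lambda as in the original sample. A minimal sketch of what that might look like in CloudFormation JSON (resource names and property values here are illustrative, not copied from the repository):

    {
      "Resources": {
        "UserPool": {
          "Type": "AWS::Cognito::UserPool",
          "Properties": {
            "UserPoolName": { "Ref": "CognitoPool" },
            "AutoVerifiedAttributes": ["email"]
          }
        },
        "UserPoolClient": {
          "Type": "AWS::Cognito::UserPoolClient",
          "Properties": {
            "UserPoolId": { "Ref": "UserPool" },
            "GenerateSecret": false
          }
        }
      }
    }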

As a result, I combined the two projects into the following project: https://github.com/eeg3/serverless-pipeline. For a full list of modifications from the parent projects, check out the Modifications from Original Sources section of the README.

You can find all of the code, instructions, details, modifications, and notes on the GitHub repository. The end-result is a Serverless Web Application template that can be fully deployed into a CI/CD pipeline with minimal effort.

Serverless Services Used

This project leverages the following AWS services as part of its serverless stack:

  1. CloudFormation: Used to deploy the entire stack.
  2. AWS Serverless Application Model (SAM): Used to provision Lambda and API Gateway (a minimal template sketch follows this list).
  3. S3: Used to provide static website hosting and to store our build artifacts.
  4. Lambda: Used to perform Functions-as-a-Service.
  5. API Gateway: Used to provide an integration point to our Lambda functions.
  6. Cognito: Used to provide authentication for our website.
  7. IAM: Provides security controls for our process.
  8. CodePipeline: Used to provide the pipeline functionality for our CI/CD process.
  9. CodeBuild: Used to build the project as part of the CodePipeline process.
  10. GitHub: Used as the source code repository. Could theoretically be replaced with CodeCommit.
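
To give a feel for how SAM ties Lambda and API Gateway together, here is a minimal sketch of a sam.json-style template. The function name, handler, and runtime are illustrative; the real definitions live in the repository's sam.json:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Transform": "AWS::Serverless-2016-10-31",
      "Resources": {
        "SamFunction": {
          "Type": "AWS::Serverless::Function",
          "Properties": {
            "Handler": "index.handler",
            "Runtime": "nodejs6.10",
            "CodeUri": "./api",
            "Events": {
              "GetSam": {
                "Type": "Api",
                "Properties": {
                  "Path": "/sam",
                  "Method": "get"
                }
              }
            }
          }
        }
      }
    }

During the build, CodeBuild typically runs the CloudFormation package step to upload the function code to S3 and rewrite CodeUri, producing the SAMOutputFile that the pipeline then deploys.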

Functionality

The process starts by launching the deploy.json template in CloudFormation. The template will ask for the following parameters (an example parameter file appears after the list):

The Combined Pipeline CloudFormation Template Parameters
  1. AppName: Name of the S3 Bucket to create that should house the website. This must be unique.
  2. CodeBuildImage: Name of the CodeBuild container image to use. Default should be fine, but customizable if desired.
  3. CognitoPool: Name of the Cognito Pool to create to use for authentication purposes.
  4. GitHubRepoBranch: Branch of the GitHub repo that houses the application code.
  5. GitHubRepName: Name of the GitHub repo that houses the application code.
  6. GitHubToken: GitHub token to use for authentication to the GitHub account. Configurable inside GitHub: https://github.com/settings/tokens. The token needs repo_hook permissions.
  7. GitHubUser: GitHub Username.
  8. SAMInputFile: Serverless transform file. By default, this is the included sam.json file.
  9. SAMOutputFile: The filename for the output file from the buildspec file. This doesn’t need to be changed unless the artifact file inside the buildspec.yml file is changed to a different name.
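
If you prefer the AWS CLI to the console, the same parameters can be passed to aws cloudformation create-stack as a JSON file (along with --capabilities CAPABILITY_IAM or CAPABILITY_NAMED_IAM, since the template creates IAM resources). The keys below come from the list above; the values are placeholders, and the parameters with sensible defaults (CodeBuildImage, SAMInputFile, SAMOutputFile) are omitted:

    [
      { "ParameterKey": "AppName",          "ParameterValue": "my-unique-site-bucket" },
      { "ParameterKey": "CognitoPool",      "ParameterValue": "serverless-pipeline-users" },
      { "ParameterKey": "GitHubUser",       "ParameterValue": "your-github-username" },
      { "ParameterKey": "GitHubRepName",    "ParameterValue": "serverless-pipeline" },
      { "ParameterKey": "GitHubRepoBranch", "ParameterValue": "master" },
      { "ParameterKey": "GitHubToken",      "ParameterValue": "<personal-access-token>" }
    ]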

The files referenced (e.g. SAMInputFile) are expected to exist within the GitHub repository. The CloudFormation deployment will warn that it is creating IAM permissions. This is because it creates roles and policies for the pipeline to use.

Launching deploy.json creates the initial CloudFormation stack. Once that first stack is created, CodePipeline will then create the pipeline stack after a period of time. The pipeline stack will be named {parent-stack}-serverless-stack.

The Foundational Services Stack and the Pipeline Stack

After initial deployment, the site will not be fully functional as the config.js file still needs to be updated so the site knows how to utilize the services. Within the parent Stack, the Outputs tab should display the following items: UserPoolClientId, BucketName, UserPoolId, OriginURL.

The Outputs from the Foundational Stack that are used within the Web Site

Within the child pipeline Stack, the Outputs tab should display the following items: ApiUrl.

The API Gateway URL created through the Pipeline that is used within the Web Site

The UserPoolClientId, UserPoolId, OriginURL, and ApiUrl should all now be placed into the website/js/config.js file so that the website knows how to use the services provisioned; this is a one-time process. Once the config.js file is updated, push the change to the GitHub repo; this will automatically update the application with the new config through the pipeline.
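
As a rough sketch, the updated config.js might look something like the following. The exact keys (including where OriginURL is used) depend on the file in the repository, so treat this as illustrative; the values are simply the stack outputs described above:

    // website/js/config.js -- illustrative sketch; keep whatever keys the repository's file already uses
    window._config = {
        cognito: {
            userPoolId: 'us-east-1_XXXXXXXXX',               // UserPoolId output from the parent stack
            userPoolClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX',  // UserPoolClientId output from the parent stack
            region: 'us-east-1'
        },
        api: {
            invokeUrl: 'https://od3tfr5l1a.execute-api.us-east-1.amazonaws.com/Prod' // ApiUrl output from the pipeline stack
        }
    };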

The site should now work as expected. Browse to the URL defined within OriginURL, and select "Register" from the top right drop-down. Enter an email address and password, and select Register. You will receive a verification code from Cognito. Once received, select "Verify" from the top right drop-down; on the verify page, enter your email and verification code provided.

The Base Template Website created through the Pipeline
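
Under the hood, the Register and Verify pages talk to the Cognito User Pool from the browser. A hedged sketch of what those calls look like with the Amazon Cognito Identity SDK for JavaScript (the repository's actual wiring may differ):

    // Illustrative only -- assumes the amazon-cognito-identity-js browser build is loaded,
    // and that email, password, and verificationCode come from the page's form fields.
    var userPool = new AmazonCognitoIdentity.CognitoUserPool({
        UserPoolId: window._config.cognito.userPoolId,
        ClientId: window._config.cognito.userPoolClientId
    });

    // "Register": create the user; Cognito then emails a verification code.
    userPool.signUp(email, password, [], null, function (err, result) {
        if (err) { console.error(err); return; }
        console.log('Created user: ' + result.user.getUsername());
    });

    // "Verify": confirm the account with the emailed code.
    var cognitoUser = new AmazonCognitoIdentity.CognitoUser({ Username: email, Pool: userPool });
    cognitoUser.confirmRegistration(verificationCode, true, function (err) {
        if (err) { console.error(err); return; }
        console.log('Account verified -- you can now sign in.');
    });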

Try browsing to the “Squirrel Farm”. If you have not logged in, you should be redirected to the Sign-In page. Enter your credentials, and you should be taken to the “Squirrel Farm”. Within the graphic in the middle of the page, one SAM Squirrel should be displayed initially, and it should increase to 15 shortly thereafter, once the page has asked API Gateway/Lambda how many it should display.

Look at all those SAM Squirrels!
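
The squirrel count shown above comes from a single GET against the API. A sketch of what that request might look like from the browser (the function name is made up, and the repository may use jQuery or another helper instead of fetch):

    // Illustrative sketch: ask the backend how many squirrels the farm should display.
    function loadSquirrelCount() {
        return fetch(window._config.api.invokeUrl + '/sam')
            .then(function (response) { return response.text(); }) // e.g. "15"
            .then(function (count) { return parseInt(count, 10); });
    }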

You can also browse directly to the API Gateway/Lambda function and see how many Squirrels should be displayed. You can do this by browsing to the “ApiUrl” listed in the pipeline Stack and appending ‘/sam’ to the end (e.g. https://od3tfr5l1a.execute-api.us-east-1.amazonaws.com/Prod/sam). By default, it should return ‘15’. The website uses this return value to decide how many squirrels should be in the farm.
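
On the backend, the function behind /sam only needs to hand that number to API Gateway. A minimal Node.js sketch of such a handler, assuming the default Lambda proxy integration that SAM "Api" events use (the real function in the repository may differ):

    // Illustrative Lambda handler for GET /sam: returns how many squirrels the site should display.
    exports.handler = function (event, context, callback) {
        callback(null, {
            statusCode: 200,
            headers: { 'Access-Control-Allow-Origin': '*' }, // lets the S3-hosted site call the API
            body: '15'
        });
    };

Changing that body (or the logic that computes it) and pushing the change is an easy way to watch the pipeline redeploy the function end to end.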

To update the website or the serverless function, simply make the modifications and push them to the GitHub repo. Once the change is checked in to GitHub, CodePipeline handles the rest automatically. To test this functionality, browse to the CodePipeline page and watch the pipeline while pushing a change. The pipeline will show the process from Source -> Build -> Deploy. If there are any failures, they will be visible within the pipeline.

The CodePipeline that keeps the Serverless Application Continuously Delivered

Start Creating!

Many thanks to the awesome folks at AWS Labs for creating the great samples that are used within this template; they deserve all the credit! I hope you find the combined template code useful as you start working with Serverless Applications.

There are lots of great resources out there around Serverless, and the community is exploding. Check out other awesome tutorials as you continue on your Serverless journey!
