CodePipeline for Serverless Applications With CloudFormation Templates

Andrei Diaconu · Published in The Startup · Oct 1, 2020 · 11 min read
CodePipeline through CloudFormation with IAM Roles

Introduction

The CI/CD process of an application is crucial to its success and having an easy way to maintain said process can lead to an even healthier lifecycle for the application. Enter infrastructure as code: a reliable method of managing and provisioning pretty much anything you would need in a CI/CD pipeline to get it going, through the use of templates and definitions. Ever needed to keep the configurations of an AWS pipeline somewhere so you don’t need to remember the clicks from the Management Console by heart? Or maybe you wanted to give a working example to a colleague, so they can build their own pipeline. These problems and many more can be solved through infrastructure as code and CloudFormation, if we’re talking AWS.

In the following lines, we’ll go through everything you need to create your own pipeline, connect it with other pipelines and maintain them all by running a fairly simple bash script. And by the end, you’ll probably realize for yourself how awesome infrastructure as code is (no need to thank us).

Table of contents

  1. Simple CodePipeline using CloudFormation, The Serverless Framework, and IAM Roles
  2. Complex CodePipeline(s) with AWS Templates and Nested Stacks
  3. Automation through scripting

Just want to see the code? Here you go:

  1. Simple pipeline CloudFormation template
  2. The Serverless Framework catch
  3. Template with nested stacks
  4. Automating template packaging/deployment through scripting

CodePipeline using CloudFormation

Building a pipeline in AWS’s CodePipeline is pretty simple and conceptually similar to other CI/CD tools out there, but it’s also quite verbose, requiring a considerable amount of code, especially if you want to build your pipeline in an infrastructure-as-code approach.

From a high level, there are five main types of resources we need in order to put together an AWS CodePipeline:

  1. An S3 bucket resource — where our code artifact will be stored
  2. A CodePipeline resource — this will model the steps and actions our CodePipeline will include and execute
  3. IAM Roles — one role that CodePipeline will assume during its execution, in order to create/update/deploy the resources in our codebase, and a second role used by a CodeCommit webhook (see #5)
  4. CodeBuild Project(s) — used by the CodePipeline resource to execute the actual commands we want in our pipeline
  5. An Event Rule — an AWS Event Rule that will act as a webhook, triggering the pipeline on each master branch change (this is only required when working with CodeCommit; if you use GitHub or another supported repo provider there are built-in webhooks)

Now we’ll go over each of the resource types and then put it all together in a complete CodePipeline.yml definition for a serverless application built on AWS Lambdas using the Serverless Framework (but not limited to these).

S3 bucket definition

You can consider this the code artifact store for our pipeline. It will be referenced and used later on by the pipeline itself, first to upload the artifact and then to use it to deploy the resources.

The definition is pretty straightforward:

Definition of the DeployBucket — our code artifact store
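A minimal version of such a DeployBucket resource could look like this (the bucket name is just a placeholder; you may also want to add lifecycle rules to clean up old artifacts):

Resources:
  DeployBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-pipeline-artifacts-bucket   # placeholder name, must be globally unique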

CodePipeline definition

This will be the bulk of our pipeline definition code, defining the stages of our pipeline.
The top-level properties of a CodePipeline resource definition are:

  1. ArtifactStore — this is where we’ll reference the S3 bucket that we’ve created earlier. We’ll do this by using AWS’s !Ref intrinsic function
  2. RoleArn — this is where we’ll reference the Role that the CodePipeline will assume during its run. We’ll do this by using the !GetAtt intrinsic function
  3. Stages — a list of stages, each containing the actions our pipeline will execute.

A high level view over our CodePipeline definition will look like this:

High level overview of the CodePipeline definition
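In outline, the resource could look something like the sketch below (the stage actions are intentionally elided here and are covered in the stages declaration further down):

Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ArtifactStore:
      Type: S3
      Location: !Ref DeployBucket                  # the artifact bucket defined above
    RoleArn: !GetAtt CodePipelineServiceRole.Arn   # the pipeline role defined below
    Stages:
      - Name: Source                # pull the code from the git repo
      - Name: Staging               # CodeBuild deploy to the staging stack
      - Name: PromoteToProduction   # manual approval gate
      - Name: Production            # CodeBuild deploy to the production stack
      # each stage also needs an Actions list; see the full stages declaration below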

Notice that, for the sake of simplicity, we’ve left out the definition of the actual stages, which we’ll cover later on. But as you can see, in our case, for a serverless application, the stages of our pipeline will be:

  • Source — retrieving the sources from a supported git repo (CodeCommit, GitHub, Bitbucket) or an S3 bucket (this is useful if you’re not working with one of the supported git repos, but it implies having to upload your code to the bucket by your own means). We’ll stick to using a git repo since this is the most common scenario.
  • Staging — reference to the Deploy-to-Staging CodeBuild project which will contain the actual deploy commands.
  • Promote to Production manual approval gate — this will prevent an automatic deploy to Production each time the pipeline runs.
  • Production — reference to the Deploy-to-Production CodeBuild project which will contain the actual deploy commands.

CodeBuild project

CodeBuild is the equivalent of build runners from other CI/CD tools. In our case we’ll use its ability to run CLI commands in order to deploy our application. The definition of a CodeBuild project that deploys our serverless + Node.js app to the staging environment/stack looks like this:

Definition of DeployToStaging CodeBuild project
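A sketch of such a project definition, assuming a standard Linux build image and an environment variable for the target stage (the names are illustrative):

DeployToStagingProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: deploy-to-staging                           # illustrative name
    ServiceRole: !GetAtt CodePipelineServiceRole.Arn  # reused here for brevity; a dedicated CodeBuild role also works
    Artifacts:
      Type: CODEPIPELINE          # input/output artifacts are handled by CodePipeline
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:4.0
      EnvironmentVariables:
        - Name: STAGE
          Value: staging          # the production project would set this to production
    Source:
      Type: CODEPIPELINE
      BuildSpec: buildspec.yml    # located in the root of the repository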

The most important part of a CodeBuild project is its buildspec.yml file, which defines the actual CLI commands that the project will execute. You can see it being referenced in the Source property. The buildspec.yml file (located in the root of the project) looks like this:

buildspec definition for a CodeBuild project
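A minimal buildspec for a Node.js + Serverless Framework project might look roughly like this (the exact install and deploy commands depend on your project; the STAGE variable comes from the CodeBuild project above):

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install -g serverless   # the Serverless Framework CLI
  build:
    commands:
      - npm ci                      # install the project dependencies
      - serverless deploy --stage $STAGE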

Putting all the pieces together, here’s how the definition of the stages inside our CodePipeline resource will look:

CodePipeline stages declaration
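A sketch of the full Stages block, assuming a CodeCommit source and the two CodeBuild projects (repository name and project logical IDs are illustrative):

Stages:
  - Name: Source
    Actions:
      - Name: Source
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeCommit
          Version: "1"
        Configuration:
          RepositoryName: my-repo        # illustrative repository name
          BranchName: master
          PollForSourceChanges: false    # the Event Rule below triggers the pipeline instead
        OutputArtifacts:
          - Name: SourceOutput
  - Name: Staging
    Actions:
      - Name: DeployToStaging
        ActionTypeId:
          Category: Build
          Owner: AWS
          Provider: CodeBuild
          Version: "1"
        Configuration:
          ProjectName: !Ref DeployToStagingProject
        InputArtifacts:
          - Name: SourceOutput
  - Name: PromoteToProduction
    Actions:
      - Name: ManualApproval
        ActionTypeId:
          Category: Approval
          Owner: AWS
          Provider: Manual
          Version: "1"
  - Name: Production
    Actions:
      - Name: DeployToProduction
        ActionTypeId:
          Category: Build
          Owner: AWS
          Provider: CodeBuild
          Version: "1"
        Configuration:
          ProjectName: !Ref DeployToProductionProject
        InputArtifacts:
          - Name: SourceOutput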

IAM Roles

As mentioned earlier, we’ll create two IAM roles: one to be used by CodePipeline itself and one used by the CodeCommit webhook that we’ll create in a later step.

CodePipelineServiceRole

An IAM role is just a collection of policies, or access rights, that a certain resource is allowed to have. In our case this is the place where we define the access limits of the CodePipeline. This is important because, according to the AWS Well-Architected Framework, a role should grant access only to the resources and actions it actually needs, as per the principle of least privilege. In other words, we should avoid defining IAM policies that give unnecessary privileges, like the one below:

This will allow access to any action of any resource in our account. Don’t do this!
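For illustration, an overly permissive policy of that kind would look something like this:

Policies:
  - PolicyName: allow-everything        # grants full access to the whole account
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: "*"
          Resource: "*"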

and instead strive to limit the access boundaries through policies that look like this:

This will limit access to only the specific action that we really need for the specific resource that we need. Do this!
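For example, a policy scoped to just the artifact bucket could look like this (the actions listed are only those the pipeline actually needs on that bucket):

Policies:
  - PolicyName: allow-artifact-bucket-access
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
          Resource: !Sub "${DeployBucket.Arn}/*"   # only the objects in our artifact bucket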

There is no silver bullet when creating a CodePipeline IAM role, because the policies (access rights) of the role will differ based on the actual resources that you use in your project. Still, there are some policies that will be required regardless of your setup, as you can see below:

CodePipeline starter role. Use this and expand to fit your specific setup
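A starting point along these lines (the role name is illustrative; tighten or extend the statements to match your own resources):

CodePipelineServiceRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: codepipeline-service-role             # illustrative name
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - codepipeline.amazonaws.com
              - codebuild.amazonaws.com             # only needed if CodeBuild reuses this role
          Action: sts:AssumeRole
    Policies:
      - PolicyName: pipeline-base-permissions
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            # read/write the code artifact in the deploy bucket
            - Effect: Allow
              Action:
                - s3:GetObject
                - s3:GetObjectVersion
                - s3:PutObject
              Resource: !Sub "${DeployBucket.Arn}/*"
            # pull the source from CodeCommit
            - Effect: Allow
              Action:
                - codecommit:GetBranch
                - codecommit:GetCommit
                - codecommit:UploadArchive
                - codecommit:GetUploadArchiveStatus
              Resource: "*"
            # start and monitor the CodeBuild projects
            - Effect: Allow
              Action:
                - codebuild:StartBuild
                - codebuild:BatchGetBuilds
              Resource: "*"
            # write build logs to CloudWatch Logs
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: "*"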

You can use the above definition as a starting point: run your pipeline and add more permissions to the role based on the error messages received from AWS. Does your setup contain Lambdas? Then you should probably add some Lambda permissions. DynamoDB? Then add the necessary DynamoDB permissions. It’s a bit of a tedious process but it will add to the security of your environment. You can also use the IAM Policy Simulator to speed up the process: https://policysim.aws.amazon.com/home/index.jsp?#

CodeCommit webhook (optional)

At this point, our CodePipeline is ready. The only thing missing is its ability to run on every change of the master branch. For this we need just two things: an Event Rule and a Role. The “webhook” we’re creating is actually an Event Rule that listens for CloudWatch events emitted by CodeCommit and triggers our pipeline whenever there’s a change on the master branch (or any branch, for that matter). Fortunately, these are less verbose and look like this:

Cloudwatch webhook role definition
CloudWatch webhook definition
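A sketch of the two resources, assuming the pipeline and repository names used earlier (the role only needs permission to start the pipeline, and the rule matches branch updates on master):

PipelineTriggerRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com           # assumed by the Event Rule
          Action: sts:AssumeRole
    Policies:
      - PolicyName: allow-start-pipeline
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: codepipeline:StartPipelineExecution
              Resource: !Sub "arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}"

PipelineTriggerRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.codecommit
      detail-type:
        - CodeCommit Repository State Change
      resources:
        - !Sub "arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:my-repo"   # illustrative repository
      detail:
        event:
          - referenceCreated
          - referenceUpdated
        referenceType:
          - branch
        referenceName:
          - master
    Targets:
      - Arn: !Sub "arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}"
        RoleArn: !GetAtt PipelineTriggerRole.Arn
        Id: codepipeline-trigger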

Deploying the pipeline

At this point you should have everything ready and we can deploy the pipeline. This is pretty straightforward and can be done by running the following command:

$ aws cloudformation deploy --template-file <my-pipeline-template>.yaml --stack-name <my-stack-name> --capabilities CAPABILITY_NAMED_IAM --region <desired-region>

Note: the --capabilities CAPABILITY_NAMED_IAM flag is just an acknowledgment that you are aware that the template file will create named IAM roles.

The Serverless Framework catch

If you’re using The Serverless Framework, there are some changes you have to make in the serverless.yml file. Normally, your serverless.yml file looks something like this:
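Something along these lines, for a simple Node.js service (the service name, region and handler below are just placeholders):

service: my-service

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  profile: default        # the local AWS profile used for deployments

functions:
  hello:
    handler: handler.hello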

Notice the profile: default property — this dictates which user/profile will be used by The Serverless Framework. The default profile is usually taken from the .aws/credentials file or from environment variables, and it’s typically an admin user with full privileges.

But once the pipeline tries to deploy the Serverless Framework stack, the default profile means nothing, because there’s no credentials file in our pipeline’s environment.

So we have to make use of the cfnRole property offered by The Serverless Framework. This property accepts an IAM role ARN as its value and uses that role when deploying the AWS resources. So we just have to put the ARN of the role we’ve created earlier in the cfnRole property, remove the profile property, and we should be set. (This means that we’ll need to deploy our pipeline template first in order to create the role, find its ARN, and update the serverless.yml file.)

See below the cfnRole property working alongside the profile property by using the serverless-plugin-ifelse plugin. This makes things work on staging/production environments/stacks (when CodePipeline does the deployment) as well as on development stacks (when you want to deploy your stack from your development machine).

serverlessIfElse for dynamically switching between cfnRole and profile
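A rough sketch of that setup (the stage names, role ARN and conditions are illustrative and should be adapted to however you name your stages):

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  profile: default                                                   # used for local dev deployments
  cfnRole: arn:aws:iam::123456789012:role/codepipeline-service-role  # illustrative ARN of the pipeline role

plugins:
  - serverless-plugin-ifelse

custom:
  serverlessIfElse:
    # local development stack: keep the profile, drop cfnRole
    - If: '"${self:provider.stage}" == "dev"'
      Exclude:
        - provider.cfnRole
    # staging/production deploys from CodePipeline: keep cfnRole, drop the profile
    - If: '"${self:provider.stage}" != "dev"'
      Exclude:
        - provider.profile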

Multiple CodePipelines using AWS Templates and Nested Stacks

We have seen how we can declare various resources and tie them together into a fully functional, independent pipeline. But what if we want to build a much larger template through combining multiple smaller ones? Or we want to group similar resources, such as roles, lambda configurations or policies? AWS’s answer to these questions is simple: Nested Stacks.

Nested Stacks are stacks created as resources of a bigger, parent stack. They are treated like any other AWS resource, thus helping us avoid reaching the 200-resource limit of a stack. They also offer functionality such as input and output parameters, through which the stacks communicate between themselves. In addition, the use of Nested Stacks is considered a best practice, as it facilitates reusability of common template patterns and scales well with larger architectures (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html).

So how can we make use of these fancy, well-regarded Nested Stacks? Simple! Treat each of the 3 pipelines as an independent nested stack. Since they don’t contain that many resources (now or in the near future, for that matter), segregating them by type doesn’t offer that big of an advantage, and thus the nested stacks won’t even require input and output parameters, since they contain everything they need inside their own definition. The 3 nested stacks will reside under a parent one, used as a single point of access whenever changes are made to any of them.

What needs to be done, then? There are three steps we need to follow in order to build our template with the use of nested stacks:

Define a root template

This file will reference all the other templates using their physical paths.

Root template file
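A root template along these lines (the stack names and file paths are illustrative):

AWSTemplateFormatVersion: "2010-09-09"
Description: Root stack grouping the individual pipeline stacks

Resources:
  FirstPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./first-pipeline.yml     # local path, rewritten at package time

  SecondPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./second-pipeline.yml

  ThirdPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./third-pipeline.yml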

This file is a simple collection of AWS::CloudFormation::Stack resources, referencing each template through its physical path. Here, you can also define the input/output parameters mentioned earlier, which are values passed between stacks when they need to communicate, but in our case they’re not needed.

Package the template

Once we’ve finished configuring our Nested Stacks setup, we can start combining them. Currently, Amazon only supports combining templates into larger ones using S3 buckets. The local files are uploaded to an existing bucket (passed as a parameter) and a new file is generated, this time containing not the physical paths, but the S3 location URLs.

Running this command:

$ aws cloudformation package --template-file /root-template-file.yml --s3-bucket bucket-name --output-template-file my-packaged-template.yml
  • --template-file — the root template file, containing the physical paths
  • --output-template-file — the name and location of the newly generated template, containing the S3 paths
  • --s3-bucket — the name of the bucket used for packaging the files

will result in something like this:

packaged template file
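The generated file keeps the same structure, but with every local path replaced by the S3 location of the uploaded copy, something like this (the object keys are illustrative hashes generated by the command):

AWSTemplateFormatVersion: "2010-09-09"
Description: Root stack grouping the individual pipeline stacks

Resources:
  FirstPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/bucket-name/1a2b3c4d5e6f7a8b.template
  SecondPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/bucket-name/9c0d1e2f3a4b5c6d.template
  ThirdPipelineStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/bucket-name/7e8f9a0b1c2d3e4f.template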

Note: The S3 bucket used for holding these files needs to already exist before the execution of the “package” command. Also, be careful not to include the definition of the deployment bucket inside any of the templates, since this would lead to a circular reference.

Deploy the template

Once the output template is generated, we can safely deploy it. CloudFormation will look for the specified files in the S3 bucket and create/update the root stack and, implicitly, the nested stacks. If the stacks already exist, they are evaluated based on change sets and if any differences are found, CloudFormation updates only the ones that were modified.

The deploy command goes like this:

$ aws cloudformation deploy --template-file /my-packaged-template.yml --stack-name my-stack-name --capabilities CAPABILITY_NAMED_IAM --region my-region

And that’s about it. You should now have 3 different pipelines created by your template. Not the smoothest process, but pretty straightforward nonetheless. A possible improvement would be automating this whole endeavor through a script. In the following section, we will see exactly how we can achieve this.

Automating the package/deploy process

As we’ve seen earlier, there are a couple of steps we need to perform in order to get our nested stack template packaged and deployed. Unfortunately, we have to go through the whole process each time we modify our pipelines.

Besides being really cumbersome, doing all of these steps manually is not recommended, as it is prone to errors. After all, you’re creating an automated CI/CD pipeline in order to reduce the amount of work you have to do, not add to it. If you’ve reached this point and you’re asking yourself “do I have to do all of that every time I want to deploy my pipeline?”, then don’t worry, because the answer is no. But how can we avoid this hassle and automate the entire process? The solution? Bash scripts to the rescue!

Using a bash script we can achieve the same result as manually deploying the pipeline(s), without giving ourselves a headache. Take a look below at an example of a simple script that does everything we need:

simple bash script
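A bare-bones version, with illustrative file names, bucket, stack name and region, would just chain the two commands:

#!/bin/bash

# package: upload the nested templates to S3 and rewrite their references
aws cloudformation package \
  --template-file ./root-template.yml \
  --s3-bucket my-deployment-bucket \
  --output-template-file ./packaged-template.yml

# deploy the packaged root template together with its nested stacks
aws cloudformation deploy \
  --template-file ./packaged-template.yml \
  --stack-name my-pipelines-stack \
  --capabilities CAPABILITY_NAMED_IAM \
  --region eu-west-1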

While this would probably work just fine (assuming the bucket exists and the template is valid), it’s a good idea to follow certain conventions regardless of the programming language you use. Let’s take a look at how we can improve our script a bit:

complex bash script:)
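A more defensive version could look like the sketch below (the variable values are placeholders; this version stops early if the bucket is missing, though you could just as well create it at that point):

#!/bin/bash
set -euo pipefail

# placeholder values, adjust to your own setup
bucketName="my-deployment-bucket"
pathToRootTemplate="./root-template.yml"
pathToOutputTemplate="./packaged-template.yml"
stackName="my-pipelines-stack"
region="eu-west-1"

# 1. make sure the deployment bucket exists and we can access it
if ! aws s3api head-bucket --bucket "$bucketName" 2>/dev/null; then
  echo "Bucket $bucketName does not exist or is not accessible" >&2
  exit 1
fi

# 2. validate the root template before doing anything else
aws cloudformation validate-template --template-body "file://$pathToRootTemplate"

# 3. package: upload the nested templates to S3 and rewrite their references
aws cloudformation package \
  --template-file "$pathToRootTemplate" \
  --s3-bucket "$bucketName" \
  --output-template-file "$pathToOutputTemplate"

# 4. deploy the packaged root template and, implicitly, the nested pipeline stacks
aws cloudformation deploy \
  --template-file "$pathToOutputTemplate" \
  --stack-name "$stackName" \
  --capabilities CAPABILITY_NAMED_IAM \
  --region "$region"

echo "Stack $stackName deployed"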

The above bash script does a couple of things:

Make sure the bucket exists

Because of the way nested stacks work, we need an S3 deployment bucket where our templates will be stored, to be used later by the root stack in the deployment process. Therefore, the first thing we need to do is ensure that the bucket exists. The head-bucket command (aws s3api head-bucket --bucket $bucketName) is perfect for this because it determines whether the bucket exists and whether we have permission to access it.

Validate the template

The next step is to make sure that the template we’re going to deploy is valid. To do this we can use the validate-template command (aws cloudformation validate-template --template-body file://$pathToTemplate) which, if the template is not valid, will return an error message detailing what is wrong with it. Once we’ve confirmed that the template is good, we can move forward and deploy it.

Package the template

The aws cloudformation package --template-file $pathToRootTemplate --output-template-file $pathToOutputTemplate --s3-bucket $bucketName command returns a copy of the root template, replacing the references to the local template files with the S3 locations where the command uploaded the artifacts. So basically, it sets us up for the next step, the actual deployment.

Deploy the pipeline(s)

After all this setup, we can finally deploy our pipeline(s). We do this with the deploy command (aws cloudformation deploy --template-file $pathToOutputTemplate --stack-name $stackName --capabilities CAPABILITY_NAMED_IAM --region $region), which uses the template that was generated by the package command in the previous step to create our (pipeline) resources. If this step succeeds, the pipeline(s) resources will be created in the specified stack.

And that’s it. We now have a script that does all the heavy lifting for us. All that’s left is to add the script to your package.json’s scripts section and you’re all set.

Conclusion

Quite a ride, wasn’t it? We’ve seen how to write the definition of an AWS pipeline and all its components, we’ve rigged a bunch of them together using CloudFormation and Nested Stacks and finally we’ve automated the whole process through the use of a bash script. Hopefully all of this came in handy and helped you avoid too many of those pesky configuration item changes on AWS, when building your own pipeline (guess what gave us an unusually hefty bill at the end of the month).

If you have any feedback on the article or the code presented in it, please leave us a comment or send us an email; every thought and idea is appreciated.

Thanks for reading and happy coding,
Andrei Arhip
Tudor Ioneasa
Andrei Diaconu
