Elastic Beanstalk deployment made simple: end-to-end automation with CloudFormation and CodePipeline

Vivek Sethia · Published in Beck et al. · 4 min read · Nov 9, 2020

CodePipeline is an automation service from Amazon Web Services for building deployment pipelines, enabling quick and reliable delivery of application and infrastructure changes to AWS services such as Elastic Beanstalk, Elastic Container Service, Lambda, and S3.

This article is the second in our Elastic Beanstalk made simple series. If you haven’t read it already, here is the link to the first article, where I showed how to create an Elastic Beanstalk application using CloudFormation. In this post, I focus on creating a deployment pipeline (using CodePipeline) that automatically deploys changes (a new container image or configuration changes) to the Elastic Beanstalk application created earlier. If you followed the first article, you have already created the S3 bucket with the files required for the Elastic Beanstalk deployment, as well as the ECR repository for the container image.

Before getting into the setup details, let us look at the architecture. We use a CloudFormation template (which can be found here) that creates the architecture described below, along with the IAM policies required for CodePipeline execution. The architecture consists of two CodePipeline stages:

  1. Source stage: An S3 bucket and an ECR repository as the sources of the CodePipeline.
  2. Deploy stage: The target Elastic Beanstalk application environment for the deployment.
Architectural components for the application described in this article

For details on these stages and the CloudFormation template used for their creation, refer to the code snippets and screenshots at the end of this article.

We use two sources, i.e., an S3 bucket and an ECR repository, that trigger our pipeline.

  1. The S3 bucket contains the configuration required for pulling images from the appropriate ECR repository, as well as the .ebextensions folder with the application configuration. This is explained in detail in the first article.
  2. The ECR repository contains the application code as a Docker image, which is deployed to the application servers.

Either of these sources can trigger the pipeline and deploy the changes to the Elastic Beanstalk application.
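The glue between the S3 source and the ECR image is Elastic Beanstalk's Dockerrun.aws.json file, which tells the environment which image to pull. A minimal single-container sketch is shown below; the registry URI, repository name, tag, and port are placeholders, not values from the actual setup:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080
    }
  ]
}
```

Pushing a new image tag to ECR, or uploading a changed configuration bundle to S3, is then enough to kick off a deployment.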

Note: For the following steps, you will need the relevant permissions on AWS to create IAM roles. I will henceforth assume that Administrator Access on AWS is available.

Steps to follow:

  1. Log in to the AWS console and navigate to CloudFormation. To quickly set up the environment, download the eks-pipeline.yaml file provided here. You can either upload this template directly in CloudFormation or first save it to an S3 bucket and provide the resulting S3 object URL.
    Note: the template is provided without support, so use it at your own risk.
  2. The following table lists the parameters required to configure the CloudFormation template. After filling out all parameters, create the CloudFormation stack. This creates and triggers the pipeline, which deploys the existing ECR container image together with the existing configuration file from S3 to Elastic Beanstalk.
Description of the parameters used for configuring the CloudFormation template

After deployment, the pipeline in the console looks as shown below.

A sample CodePipeline deployment created using CloudFormation
  1. Source:
Code snippet from the template that creates the Source stage in CodePipeline
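The Source stage declares both sources as actions within one stage. The following is a hedged sketch of what that section of the template looks like, not the exact template; the parameter names (SourceBucketName, EcrRepositoryName), object key, artifact names, and image tag are assumptions:

```yaml
Stages:
  - Name: Source
    Actions:
      - Name: S3Source
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: S3
          Version: "1"
        Configuration:
          S3Bucket: !Ref SourceBucketName   # assumed parameter name
          S3ObjectKey: application.zip      # assumed object key
        OutputArtifacts:
          - Name: SourceArtifact
      - Name: ECRSource
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: ECR
          Version: "1"
        Configuration:
          RepositoryName: !Ref EcrRepositoryName  # assumed parameter name
          ImageTag: latest
        OutputArtifacts:
          - Name: ImageArtifact
```

Both actions run in the same stage, so a change in either source produces fresh artifacts for the downstream Deploy stage.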
CodePipeline stage Source as seen in AWS Console

  2. Deploy:

Code snippet from the template that creates the Deploy stage in CodePipeline
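The Deploy stage uses CodePipeline's built-in ElasticBeanstalk action to push the source artifact to the target environment. Again a sketch rather than the exact template; the parameter names (EbApplicationName, EbEnvironmentName) and artifact name are assumptions:

```yaml
  - Name: Deploy
    Actions:
      - Name: DeployToBeanstalk
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: ElasticBeanstalk
          Version: "1"
        Configuration:
          ApplicationName: !Ref EbApplicationName   # assumed parameter name
          EnvironmentName: !Ref EbEnvironmentName   # assumed parameter name
        InputArtifacts:
          - Name: SourceArtifact
```

The action consumes the S3 source artifact (which contains the Dockerrun.aws.json and .ebextensions configuration) and hands it to Elastic Beanstalk, which performs the actual environment update.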
CodePipeline stage Deploy as seen in AWS Console

With this article, we complete our series on end-to-end automation for Elastic Beanstalk applications. If you have further questions or special use cases, please feel free to comment and connect with us.
