Blue/Green Deployment on AWS ECS EC2 Instances

sreehari s kumar
Feb 4, 2024


In the ever-evolving software development landscape, efficient deployment strategies are crucial to ensuring seamless releases and minimizing downtime. One such strategy gaining popularity is Blue/Green deployment, especially when orchestrated on Amazon Web Services (AWS) Elastic Container Service (ECS) EC2 instances. This approach allows for smoother transitions between different versions of your application, ensuring reliability and minimizing the impact on end-users.

Introduction

Traditionally, deploying updates to a live application could be a nerve-wracking process, fraught with the potential for unexpected issues and downtime. Blue/Green deployment offers a solution to these challenges, providing a methodical and low-risk approach to software releases. When integrated with AWS ECS EC2 instances, this strategy becomes even more powerful, leveraging the scalable and flexible nature of ECS to enhance the deployment process.

Types of Deployments

Before delving into the specifics of Blue/Green deployment, it’s essential to understand the broader spectrum of deployment options available on AWS. Traditional deployment methods involve updating the existing environment, often leading to service interruptions. AWS provides alternative strategies, including Canary Deployment, where updates are gradually rolled out to a subset of users, and Blue/Green Deployment, the focus of this article, where two identical environments coexist, allowing for a seamless transition between them.

Why choose Blue/Green deployment?

While Canary deployments provide a gradual rollout, Blue/Green deployment takes it further by maintaining two complete environments, ‘Blue’ (existing) and ‘Green’ (new). The advantages of Blue/Green deployment are noteworthy, particularly when compared to Canary deployments. Firstly, Blue/Green allows for a full and instant rollback if issues arise, ensuring minimal user impact. Secondly, it facilitates comprehensive testing in a production-like environment, reducing the chances of unforeseen complications. Lastly, the transition between environments is swift, minimizing downtime and optimizing user experience during deployment.

Requirements:

  • An AWS account
  • Sample code for an application
  • A Dockerfile to build a Docker image of your application

AWS Services required:

  • IAM (CLI configuration, Roles)
  • VPC (Isolated network)
  • ECR (Private image repository)
  • EC2 (Application Load Balancer, Target Groups)
  • ECS (Containerization)
  • CodeDeploy (Service update automation)

Let’s get started

IAM

Let’s initiate the process with IAM services. Create an IAM role to facilitate communication between AWS CodeDeploy and ECS for service updates. This role grants essential permissions, ensuring a smooth integration that allows CodeDeploy to orchestrate deployments and updates seamlessly within the ECS environment.

  • Go to IAM > Select Roles > Create role
  • Scroll down to the “Use case” section and select “CodeDeploy” from the drop-down.
  • Choose “CodeDeploy - ECS” from the list, as this role is specifically crafted for facilitating communication between the CodeDeploy service and the ECS service.
  • Click on Next.
  • Give a “Name” for the role and click on “Create”.
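If you prefer the CLI, the same role can be created like this. This is a sketch: the role name `ecsCodeDeployRole` is an example, and `AWSCodeDeployRoleForECS` is the AWS-managed policy that the “CodeDeploy - ECS” use case attaches.

```shell
# Trust policy that lets the CodeDeploy service assume this role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codedeploy.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role (the name is an example; choose your own).
aws iam create-role \
  --role-name ecsCodeDeployRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the managed policy used by the "CodeDeploy - ECS" use case.
aws iam attach-role-policy \
  --role-name ecsCodeDeployRole \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS
```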

VPC

Creating a customized Virtual Private Cloud (VPC) for your application is a highly recommended approach. This setup spans two Availability Zones (AZs), with a public and a private subnet in each AZ (two public and two private subnets in total).

The process of creating a VPC on AWS has been significantly simplified: the “VPC and more” wizard now creates the subnets, route tables, and gateways for you in a single step.

Search for the VPC service > Create VPC

  • Refer to the screenshots below for an easy approach to creating VPC.
    Just select your desired option under each setting.
  • Once all the options are selected, click on Create VPC. And you’ll see the resources being created as per your desired selected options.
  • Explore the newly created VPC. Navigating to the “Resource Map” section provided by AWS offers users a comprehensive overview of the VPC’s network configuration.

Creating a Security Group for the ALB

  1. Navigate to the AWS Management Console and go to the VPC service.
  2. Select “Security Groups” from the left-hand navigation pane.
  3. Click on the “Create security group” button.
  4. Fill in the following details (any values can be given):

Security Group Name: ecs-ang-app-SG
Description: SG for ecs-ang-app
VPC: Select the VPC you’ve previously created.

5. Now, let’s configure the inbound rule:

Click on the “Inbound rules” tab. Add a new rule:
Type: HTTP
Port Range: 80
Source: Allow traffic from all sources (0.0.0.0/0 or ::/0).

6. Create the security group.
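The same security group can be created from the CLI. In this sketch, the VPC ID is a placeholder you would replace with the ID of the VPC created above:

```shell
# Create the security group in the custom VPC and capture its ID.
SG_ID=$(aws ec2 create-security-group \
  --group-name ecs-ang-app-SG \
  --description "SG for ecs-ang-app" \
  --vpc-id vpc-0123456789abcdef0 \
  --query 'GroupId' --output text)

# Inbound rule: allow HTTP (port 80) from anywhere over IPv4.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```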

Elastic Container Registry (ECR)

Amazon ECR is AWS’s managed and secure container registry, offering a private repository for storing and deploying Docker container images. With tight integration with ECS and IAM, ECR ensures secure storage and streamlined deployment of container images.

  • We need to create a private image repository to store the Docker image of our application.
  • Search for ECR > Create repository. I’m naming mine “angular”, but feel free to choose any name you prefer. Create the repository.
  • You’ll be able to see the repository you’ve created.
  • Set up the AWS CLI on your local system for authentication and push the Docker image to the private image repository. Find the necessary commands by clicking on the “View push commands” option within your repository.
  • Upon uploading the image to the repository, you can view it along with its associated Image URL. This URL is crucial for container deployment.
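The push sequence typically looks like the following. The region, account ID, and repository name below are placeholders; the “View push commands” button in your repository shows the exact values for your account.

```shell
# Placeholder values - replace with your own.
AWS_REGION=us-east-1
ACCOUNT_ID=123456789012
REPO=angular

# Authenticate Docker against your private ECR registry.
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin \
    "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# Build, tag, and push the application image.
docker build -t "$REPO" .
docker tag "$REPO:latest" "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
```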

Target Groups (TG) and Application Load Balancer (ALB)

An optimal practice involves creating target groups and application load balancers independently from the EC2 console, providing greater flexibility than using the ECS console. For blue/green deployment, establishing two target groups is essential — one for blue deployment (port 80) and another for green deployment (port 8080). The Application Load Balancer (ALB) plays a critical role in overseeing target health checks and routing traffic between the two target groups.

Let’s create our first Target Group (Blue deployment)

  • Go to EC2 service > Select Target groups > Create target group
  • Select “IP addresses” as Target type
  • Provide a “name” for the Target group. I'm going with “angular-TG”.
  • Select the protocol as “HTTP” and the corresponding port as “80”.
  • Leave the rest as default and click “Next”.
  • Confirm the VPC is selected correctly.
  • There’s no need to select any targets at this moment. Targets will be dynamically registered once the cluster is established. Go ahead and create the Target group.
  • Similarly, create another Target group for the green deployment. Name it as “angular-new-TG” with port “8080”.
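Both target groups can also be created from the CLI. The VPC ID below is a placeholder for the VPC created earlier:

```shell
# Blue target group: IP targets on port 80.
aws elbv2 create-target-group \
  --name angular-TG --protocol HTTP --port 80 \
  --target-type ip --vpc-id vpc-0123456789abcdef0

# Green target group: IP targets on port 8080.
aws elbv2 create-target-group \
  --name angular-new-TG --protocol HTTP --port 8080 \
  --target-type ip --vpc-id vpc-0123456789abcdef0
```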

Let’s create an ALB now.

  • Go to EC2 service > Select “Load Balancers” > Create Load Balancer > Select “Application Load Balancer”.
  • Give a “name” for the ALB and make sure “Internet-facing” is selected.
  • Choose the previously generated VPC from the dropdown menu. Opt for the subnets in both Availability Zones (AZs) to enhance availability. Ensure the selection of “Public subnet” since the ALB requires public access.
  • Select the “Security Group” we’ve created for this project.
  • In the “Listener and Routing” section, make sure the listener on port 80 forwards traffic to the “angular-TG” group.
  • Add another listener port 8080 to forward traffic to the “angular-new-TG” group.
  • That’s it. Now create the ALB. It will take some time for the ALB to become “Active”.
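For reference, here is a CLI sketch of the same ALB setup. The subnet IDs, security group ID, and target group ARNs are placeholders for the resources created above:

```shell
# Create the internet-facing ALB across both public subnets.
ALB_ARN=$(aws elbv2 create-load-balancer \
  --name angular-alb --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# Production listener: port 80 forwards to the blue target group.
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<angular-TG-arn>

# Test listener: port 8080 forwards to the green target group.
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 8080 \
  --default-actions Type=forward,TargetGroupArn=<angular-new-TG-arn>
```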

Elastic Container Service (ECS)

Now, let’s move on to the core segment of this project.

Amazon Elastic Container Service (ECS) is a fully managed container orchestration service offered by Amazon Web Services (AWS). ECS enables users to run, stop, and manage Docker containers on a cluster, providing a scalable and efficient solution for deploying containerized applications. It seamlessly integrates with other AWS services, offering flexibility in building and scaling applications while abstracting the complexity of container management. ECS supports both Fargate, a serverless compute engine, and EC2 instances, allowing users to choose the deployment option that best suits their needs.

Here, we will proceed with ECS EC2, as comprehensive documentation for this option is relatively scarce on the internet. I’ll guide you through the entire process — just stick with me.

  • Search for ECS service.

1. Cluster

An Amazon ECS cluster is a logical grouping of the tasks, services, and (for the EC2 launch type) container instances that run your workloads. The cluster supplies the compute capacity your containers are scheduled onto, whether that capacity comes from EC2 instances or the serverless compute engine, Fargate.

  • Select Clusters from the left pane > Create cluster
  • Give a “name” for your cluster.
  • Select “Amazon EC2 Instances” as your Infrastructure option.
  • To achieve cluster scaling, it’s essential to “create an autoscaling group”, as it is responsible for dynamically adjusting the size of the cluster.
  • Select the Operating system, EC2 instance type and the Desired capacity as per your requirement.

The minimum and maximum numbers in an autoscaling group define the range of instances that the group can maintain. The minimum specifies the lower limit, ensuring a baseline capacity is always available, while the maximum sets an upper limit, preventing the group from exceeding a specified size. These parameters allow the autoscaling group to automatically adjust the number of instances within the defined range based on demand, optimizing resource utilization and application performance.
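If you later want to change this range, it can be adjusted on the existing group from the CLI. The group name below is a placeholder; ECS creates one Auto Scaling group per cluster capacity provider:

```shell
# Widen the scaling range of the cluster's Auto Scaling group.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name ecs-angular-asg \
  --min-size 1 \
  --max-size 3 \
  --desired-capacity 1
```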

  • To access the EC2 instance, it’s necessary to have an SSH key pair. You can either choose an existing key or generate a new one. As I don’t have a key, I’ll proceed to create a new one.
  • Give a name for your key, and leave all other options as default. Then select “Create key pair”.
  • Returning to the cluster configuration, you can locate your SSH key in the available drop-down options.
  • In the “Network” section, choose the previously created VPC. Note that I’ve opted for only the “private subnets”, intending to limit public exposure to my application. However, you have the flexibility to launch your application in public subnets if desired.
  • Additionally, “create a new Security Group” to enable traffic to your application. In this setup, I’m permitting ports 22, 80, and 8080 for SSH, blue deployment, and green deployment, respectively.
  • Auto-assign public IP is “turned off” as the instance is launched in a private subnet.
  • Now, create the cluster.

You might be curious about the behind-the-scenes construction of the cluster with the provided options. AWS utilizes the CloudFormation service, an infrastructure-as-code solution managed by AWS, to build the cluster according to your specified settings.

  • Once the events in CloudFormation are completed, you’ll be able to view the newly created cluster.
  • Observe the launch of an EC2 instance from the previously created Auto Scaling group. This instance is initiated to meet the minimum number defined by the scaling group. The container we generate will be deployed onto this instance.

2. Task Definition

An ECS Task Definition is a blueprint for defining and configuring containers within an ECS service. It specifies essential parameters such as container images, resource requirements, networking details, and dependencies. Task Definitions enable precise control over the execution environment for each container, allowing for seamless deployment and scalability of containerized applications on the ECS platform.

  • Select the “Task Definition” option from the left pane of the ECS service.
  • Select the “Create New Task Definition” option.
  • Refer to the following screenshot for the Task Definition configuration.
  • The “Task definition family” field holds the name.
  • Under Infrastructure requirements, select “Amazon EC2 Instances” for Launch type.
  • You can leave the OS, Architecture and Network mode as default.
  • Specify the “Task size” for the container.
  • You need to specify the Name, Image URL, Container port and Protocol.
  • Now, create the Task Definition.
  • Let’s move on to create a “Service” from the Task definition.
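The console fields above map onto a task definition JSON, which can also be registered from the CLI. In this sketch the family name, image URI, and CPU/memory values are examples; `awsvpc` network mode is used because the target groups created earlier use IP targets:

```shell
# Example task definition for the EC2 launch type.
cat > taskdef.json <<'EOF'
{
  "family": "angular-task",
  "requiresCompatibilities": ["EC2"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [{
    "name": "angular",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/angular:latest",
    "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
    "essential": true
  }]
}
EOF

# Register it; each registration creates a new revision of the family.
aws ecs register-task-definition --cli-input-json file://taskdef.json
```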

3. Service

In an ECS cluster, a service is a fundamental construct that defines how tasks should be deployed and maintained. It ensures the desired number of tasks are running, automatically replacing failed tasks and distributing them across the cluster. ECS services enable the long-term operation and scaling of applications, providing a resilient and scalable foundation for containerized workloads.

  • Select the cluster we’ve created.
  • Select “Launch type” under Compute configuration, and make sure “EC2” is selected from the drop-down.

You can see that the Application type and the Task Definition version will be selected as default.

  • Give a “name” for the service.
  • Select “Replica” under the Service type.
  • Give the number of “desired tasks” to be run.
  • Expand the “Deployment options” and select “Blue/Green deployment” as the deployment type.
  • Select “CodeDeployDefault.ECSAllAtOnce” from the Deployment configuration.

CodeDeployDefault.ECSAllAtOnce is a deployment configuration in AWS CodeDeploy for ECS that shifts all production traffic from the blue task set to the green task set at once, as soon as the green tasks pass their health checks. It is the fastest option; the canary and linear configurations shift traffic in gradual steps instead.

  • Select the IAM role we’ve created initially from the drop-down option.
  • Ensure the correct selection of the VPC, subnets(private), and security groups that have been created.
  • Choose “Application Load Balancer” from the drop-down of the Load balancer type options.
  • Select the “Application Load Balancer” which we created earlier.

Here comes the tricky part!

  • In the “Listener” section, confirm that “80” is assigned as the Production Listener, and “8080” is designated as the Test Listener.
  • In the “Target groups” section, confirm “angular-TG” is selected as Target group 1, and “angular-new-TG” is chosen as Target group 2.
  • Create the service.
  • It may take a few minutes for the Task to be up and running.
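The whole service configuration can be expressed as a single CLI call. This is a sketch: the cluster, service, subnet, and security group identifiers are placeholders, and the key detail is `--deployment-controller type=CODE_DEPLOY`, which hands deployments over to CodeDeploy:

```shell
# Create the service with the CodeDeploy (blue/green) deployment controller.
aws ecs create-service \
  --cluster angular-cluster \
  --service-name angular-service \
  --task-definition angular-task \
  --desired-count 1 \
  --launch-type EC2 \
  --deployment-controller type=CODE_DEPLOY \
  --load-balancers targetGroupArn=<angular-TG-arn>,containerName=angular,containerPort=80 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"
```

When created this way, the CodeDeploy application and deployment group (which reference both target groups and listeners) still need to exist; the console flow described above creates them for you.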

Now, let’s attempt to access the application and validate its functionality.

  • Copy the DNS name of the Load balancer we’ve created.
  • Paste the copied DNS name onto a browser and hit “Enter”.

Bravo! The blue deployment is working fine.
Now, let’s test the green deployment.

To inspect the green deployment, let’s make a minor modification to the application code, build a Docker image with the updated code, and then push the new Docker image to the ECR.

  • After successfully pushing the latest image to ECR, proceed to create a revision of the current Task definition version.
  • Given that there are no configuration changes and the image URL remains the same, generate a new revision without altering any values.
  • Next, initiate an update to the service to trigger a deployment, launching a new container from the latest image recently pushed to ECR.
  • Select the “Cluster” and “Service” from the drop-down correctly.

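The rebuild-and-push steps mirror the earlier push sequence. Placeholders are the same as before; because the image tag is reused, the task definition JSON itself does not need to change before registering the new revision:

```shell
# Rebuild the image with the updated code and push it to ECR.
docker build -t angular .
docker tag angular:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/angular:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/angular:latest

# Register a new revision of the same task definition family.
aws ecs register-task-definition --cli-input-json file://taskdef.json
```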
With the Blue/Green deployment type, deployments are handled by the CodeDeploy service. The application name, deployment group name, and deployment configuration are generated automatically by CodeDeploy. If they are not created, create them manually from the CodeDeploy service page.

  • Click on “update” once all the details are added.
  • You’ll be able to see a “deployment ID” in the pop-up.
  • Navigate to the “Deployments” tab on the service page to observe both “blue” and “green” deployments. The blue deployment will be automatically removed once the green deployment is successfully operational.
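Under the hood, the console’s “update” creates a CodeDeploy deployment whose AppSpec points the service at the new task definition revision. A hedged CLI sketch of the same step, assuming the console-generated `AppECS-…`/`DgpECS-…` names and a placeholder task definition ARN:

```shell
# Describe the deployment: which task definition revision to run, and
# which container/port the load balancer should route to.
cat > create-deployment.json <<'EOF'
{
  "applicationName": "AppECS-angular-cluster-angular-service",
  "deploymentGroupName": "DgpECS-angular-cluster-angular-service",
  "revision": {
    "revisionType": "AppSpecContent",
    "appSpecContent": {
      "content": "{\"version\": 0.0, \"Resources\": [{\"TargetService\": {\"Type\": \"AWS::ECS::Service\", \"Properties\": {\"TaskDefinition\": \"arn:aws:ecs:us-east-1:123456789012:task-definition/angular-task:2\", \"LoadBalancerInfo\": {\"ContainerName\": \"angular\", \"ContainerPort\": 80}}}}]}"
    }
  }
}
EOF

# Trigger the blue/green deployment through CodeDeploy.
aws deploy create-deployment --cli-input-json file://create-deployment.json
```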

We’ve reached the final step of verifying the green deployment. Copy the DNS name of the ALB as done previously, paste it into a browser, and hit “Enter”.

Please leave a comment below if you notice any differences in the application compared to its previous state.

Conclusion

In the dynamic landscape of rapid application deployment, Blue/Green deployment on AWS ECS EC2 stands out as a game-changer, offering a seamless and risk-mitigated update approach. Embrace this method to enhance deployment practices, satisfy users, and stay ahead in cloud computing. Hope this article was helpful to you!

