Continuous deployment to AWS ECS from CircleCI

Bogdan Frankovskyi
SavvyClutch Engineering Club
9 min read · Mar 23, 2019

You know what this is all about, so let’s start.

We have some kind of website with a dockerized environment, and we want to configure automatic zero-downtime deployment on every push to the master branch of our GitHub repository.

What we are going to use to do that:

  1. CircleCI as a build server
  2. Github as a code repository
  3. Amazon EC2 Container Service (ECS) as a production environment and our deployment target
  4. Amazon S3 bucket to keep our secret keys used by the website

CircleCI

It’s a great build service integrated with GitHub. It is easily configurable with a yaml file — just put circle.yml into your repository, configure the dependencies, test, and deploy sections, and that’s it.

Amazon EC2 Container Service

AWS ECS organizes a cluster out of multiple Amazon machines (AWS EC2 instances). To run a container on the cluster from your docker image, you define a Task — which is just a resource definition for your container instance: which docker image to use, how much memory to allocate for the container, etc. Tasks run on cluster instances; which Tasks, and how many of them, should run is defined in a Service. Service settings also let you configure which ELB (Amazon load balancer) to use for the cluster and hold the Auto Scaling rules, in case you want to change the number of containers dynamically.

To deploy a new version of your docker image, you create a new Task Definition with the new image tag inside, register it in the Service, and then run a Service update to roll out the new Task.

On a Service update, the new Task is started on any free reserved EC2 instance, and after that the old Task is stopped.
Unfortunately, if you want smooth updates without a maintenance window, you can’t just run the Task with the new image version on the same machine: it uses the same port as the previous one, so you would have to shut down the old container before starting the new one. This means you should keep one EC2 instance in the cluster free, for deployment purposes.

Amazon EC2 Container Registry (ECR)

It’s a docker registry like Dockerhub, but managed by Amazon, and it has more options for access management. We will use it for speed and security reasons.

S3 bucket

A common problem people face when deploying with docker is how to handle sensitive info like database credentials, third-party API keys, passwords, and tokens.

Including this information in the docker image is not secure enough: it can be retrieved from the docker cache, and your infrastructure will be compromised if someone gets access to your docker repository or image. Besides, to bake these variables into the image you have to provide them to the build server or commit them to your code repository, which is also a poor place to keep secrets, especially if you use third-party build services. Some people add this information to the ECS Task definition, but that still requires storing it in the build server environment or the code repository.

We are going to use an S3 bucket with special access rules, based on an AWS Identity and Access Management (IAM) role, to handle this. This way, the environment variables with sensitive data can be retrieved only from inside our VPC.

What we are going to do

So, our deployment process will look like this: a push to master triggers a CircleCI build; CircleCI builds a new docker image, pushes it to ECR, registers a new Task Definition, and updates the ECS Service, which swaps the running containers.

Simple, heh?

And the implementation steps:

  1. Create a repository for docker images
  2. Create a task template for ECS service
  3. Configure secrets bucket
  4. Configure docker image
  5. Create cluster
  6. Configure cluster Service
  7. Configure CircleCI

Create a repository for docker images

Go to the Amazon EC2 Container Service -> Repositories in your AWS Console. Click Create repository and enter a repository name. I suggest using a more specific name, like projectname-server. Save the Repository URL.

Create a task template for ECS service

Now we need to create a template for Task Definitions. After building each new docker image, we will create a new Task Definition from this template.
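A minimal ecs-task-template.json could look like this (the port numbers and resource limits here are assumptions — adjust them to your app):

```json
{
  "containerDefinitions": [
    {
      "name": "<TASK NAME>",
      "image": "<Repository URL>:%IMAGE_TAG%",
      "cpu": 256,
      "memory": 300,
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```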

Replace <Repository URL> with the docker repository URL from the previous step. We will replace <TASK FAMILY> later with the correct Task family option. You can choose any <TASK NAME> you want (website, for example).

On deployment, our deploy script will replace %IMAGE_TAG% with the real tag and produce a new Task Definition for the Service. The memory, portMappings, and cpu options are self-explanatory, but make sure that you allocate enough memory for the container.

Configure secrets bucket

Create a bucket on S3 named ecs-secrets and add the following policy to the bucket (Properties -> Permissions -> Edit bucket policy):
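A policy in the spirit of the AWS blog post from the Literature section might look like this — a sketch, assuming an S3 VPC endpoint is configured (the aws:sourceVpc condition only applies to requests arriving through one), with the VPC ID as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::ecs-secrets/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyGetFromOutsideOurVpc",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ecs-secrets/*",
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "<YOUR VPC ID>"
        }
      }
    }
  ]
}
```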

This policy allows putting only encrypted files into the bucket and getting files from the specified VPC only.
(This assumes you already have a configured VPC. If not, you can create a new one at the Create cluster step and use it in the bucket policy.)

To upload a new file to the bucket, you can use the following command with the aws CLI (the AWS command line tool):

$ aws s3 cp website_secrets.txt s3://ecs-secrets/website_secrets.txt --sse

Put your secret environment variables inside website_secrets.txt, for example:

SECRET_KEY_BASE=adsafasdfsafwfwefdsfsdacwaeewfdadsfasdfewceascadcasdcdsadceeas
DB_HOST=db_host_adress
DB_USER=dbuser
DB_PASSWORD=supersecretpass

Also, I suggest enabling logging for this bucket — just in case.

UPD. It is now possible to grant access to this bucket to a specific Task only, which is more secure than allowing access from the whole VPC.

Configure docker image

Now we need to configure our docker image to load the secrets from S3 into the container environment on container start. We will use an entrypoint script for this. It will load each line of website_secrets.txt into the container environment, so all the environment variables will be accessible to the webserver. Create the following secrets-endpoint.sh inside your repository:
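A sketch of such a script — the bucket and file names come from the steps above, everything else (paths, the skip-download shortcut) is an assumption:

```sh
#!/bin/sh
# secrets-endpoint.sh — container entrypoint: load secrets, then run CMD.
set -e

# Export every KEY=VALUE line of the given file into the environment.
load_secrets() {
  set -a
  . "$1"
  set +a
}

SECRETS_FILE=/tmp/website_secrets.txt

# Download the secrets from S3; if the file is already present (e.g. in
# local development), skip the download. If the download fails, the
# container starts without secrets.
if [ -f "$SECRETS_FILE" ] || aws s3 cp "s3://ecs-secrets/website_secrets.txt" "$SECRETS_FILE"; then
  load_secrets "$SECRETS_FILE"
  rm -f "$SECRETS_FILE"
fi

# Hand control over to the CMD from the Dockerfile.
exec "$@"
```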

Don’t forget to add execute permissions to the script.

$ chmod +x ./secrets-endpoint.sh

As you can see, this script uses the aws CLI to download the file with secrets, so before running it in the container we should install the aws CLI into the docker image. Change your Dockerfile and add the following lines to install the aws CLI inside the docker image (the example is for Debian-based distro images, like Ubuntu):
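One minimal way to do that (the exact package set is an assumption — adapt it to your base image):

```dockerfile
# Install the aws CLI so the entrypoint script can download the secrets.
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3-pip \
 && pip3 install --no-cache-dir awscli \
 && rm -rf /var/lib/apt/lists/*
```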

Then we need to copy the entrypoint script into the image and run it right before the CMD line in the Dockerfile.
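For example (the CMD here is a placeholder for whatever your image already runs):

```dockerfile
# Run the webserver through the secrets entrypoint.
COPY secrets-endpoint.sh /usr/local/bin/secrets-endpoint.sh
ENTRYPOINT ["/usr/local/bin/secrets-endpoint.sh"]
CMD ["your-webserver-command"]
```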

Create cluster

For cluster creation, I suggest using the ECS CLI. There are two reasons to use the ECS CLI instead of the AWS CLI: first of all, it is simpler to use; second, the ECS CLI sets up the cluster via CloudFormation, which then manages the cluster resources — EC2 instances, VPC, security groups, etc. — so we don’t need to do that manually. Additionally, it allows using a docker-compose yaml file for Task Definition creation. We are not going to use this feature, but it can be helpful in other scenarios.

So, install the ECS CLI and create a configuration file for it (on Linux it lives at ~/.ecs/config). Example config file:
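Something along these lines (this is the legacy ecs-cli v0.x INI format; the cluster name and credentials are placeholders):

```ini
[ecs]
cluster               = my-website-formation
region                = us-east-1
aws_access_key_id     = <YOUR AWS ACCESS KEY>
aws_secret_access_key = <YOUR AWS SECRET KEY>
```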

Choose your cluster name at this point. I usually build the name by the pattern projectname-formation, to record how the cluster resources are managed (CloudFormation in this case).

To create a cluster on existing VPC run the following command:

$ ecs-cli up --keypair <key here> --capability-iam --size 2 --vpc <VPC ID> --subnets subnet-<SUBNET 1 ID>,subnet-<SUBNET 2 ID>

Key pairs are stored under AWS EC2 -> left panel -> Key pairs. There you can create a key pair for accessing the EC2 instances.

If you have no VPC yet, ecs-cli up will create a new one when run without the --vpc option.

You should add at least 2 instances to your cluster in order to keep one machine free as a deployment target.

Configure cluster Service

After cluster creation, we should create Service in the cluster to manage our Tasks. Before doing that, let's push the latest docker image to the ECR repository and register a new Task for this image.

Login to the ECR:

$ aws ecr get-login --no-include-email --region us-east-1 | sh

Build a new image

$ docker build -t my-website .

Tag image with ECR tags

$ docker tag my-website <ECR Repository URL>:1
$ docker tag my-website <ECR Repository URL>:latest

Where <ECR Repository URL> is the URL of the project docker repository (it looks like 1234567890.dkr.ecr.us-east-1.amazonaws.com/my-website).

Push images to ECR

$ docker push <ECR Repository URL>:1
$ docker push <ECR Repository URL>:latest

Now, let’s create a new Task Definition and register it on ECS:

$ sed -e "s;%IMAGE_TAG%;1;g" ecs-task-template.json > my_website-1.json
$ aws ecs register-task-definition --family <TASK FAMILY> --cli-input-json file://my_website-1.json

Where <TASK FAMILY> can be any name you want to group your Tasks under. Usually I use the project name, like my_great_website.

Ok, now we can create the Service. Go to ECS -> your cluster -> Services tab -> click the Create button.

Choose your Task Definition, add a service name (website), and set 1 in the Number of tasks field. The Service should run the container on one of the registered instances.

Let’s make sure that everything works. Go to the Tasks tab and click on the value in Container Instance column for the active task. You should be able to see Public IP. Try to open it in the browser and check if everything works as expected.

Troubleshooting

If something goes wrong, you can inspect your container on the instance. To do that, you first must allow an SSH connection to the instance.

  • Go to the instance on EC2 (you can do it from Container Instance page)
  • Description -> Security groups
  • Click on current security group
  • Go to Inbound
  • Click Edit
  • Add a rule for SSH

Connect to the instance:

$ ssh -i your_keypair.pem -o 'IdentitiesOnly yes' ec2-user@<INSTANCE IP>

where <INSTANCE IP> is public IP for EC2 instance with our container.

After that you can check containers on this instance:

$ docker ps -a

And see container logs:

$ docker logs -f <CONTAINER ID>

Also, you can do regular docker stuff.

Don’t forget to remove SSH permission from instance after debugging!

Configure CircleCI

If everything works fine, we can configure CircleCI to automatically deploy new versions of the docker image to AWS ECS. Add an empty circle.yml to the project and connect the project to the service.

We need AWS credentials to allow CircleCI to push new images to ECR and update our ECS Service. It’s highly recommended to create a separate IAM role for this.

Add AWS credentials to the CircleCI project settings.

Project settings -> Permissions section -> AWS permissions -> add AWS key and secret.

The aws CLI expects a default region to work with, so we need to set an environment variable with the AWS region in the build environment.

Add default AWS region:

Project settings -> Build Settings section -> Environment Variables -> add the AWS_DEFAULT_REGION variable with your region (us-east-1 in my case)

Now, configure circle.yml to build a new image, push it to ECR, and run the deployment script. Example:
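A 1.0-style circle.yml along these lines should work — a sketch, where the image name, region, test command, and repository URL are assumptions:

```yaml
machine:
  services:
    - docker

dependencies:
  pre:
    - pip install awscli

test:
  override:
    - echo "run your tests here"

deployment:
  production:
    branch: master
    commands:
      - docker build -t my-website .
      - aws ecr get-login --no-include-email --region us-east-1 | sh
      - docker tag my-website <ECR Repository URL>:$CIRCLE_SHA1
      - docker push <ECR Repository URL>:$CIRCLE_SHA1
      - bash deploy.sh
```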

Now we need to create deploy.sh. This script should:

  1. Create a new task definition for docker image with $CIRCLE_SHA1 tag
  2. Register it in the cluster Service
  3. Run Service update process

Example:
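A sketch of the three steps above; the family, service, and cluster names are placeholders, and the --query flag used to capture the new revision ARN is an assumption about how you want to wire the two aws calls together:

```sh
#!/bin/sh
# deploy.sh — register a new task definition and roll the Service over.
set -e

# Substitute the %IMAGE_TAG% placeholder in a task template.
render_task() {
  sed -e "s;%IMAGE_TAG%;$1;g" "$2"
}

main() {
  tag="${CIRCLE_SHA1:-latest}"

  # 1. Create a new task definition for the image with the $CIRCLE_SHA1 tag.
  render_task "$tag" ecs-task-template.json > "task-$tag.json"

  # 2. Register it in the cluster, capturing the new revision ARN.
  task_arn=$(aws ecs register-task-definition \
    --family "<YOUR TASK FAMILY>" \
    --cli-input-json "file://task-$tag.json" \
    --query 'taskDefinition.taskDefinitionArn' --output text)

  # 3. Point the Service at the new revision; ECS swaps the Tasks.
  aws ecs update-service \
    --cluster "<YOUR CLUSTER NAME>" \
    --service "<YOUR SERVICE NAME>" \
    --task-definition "$task_arn"
}

# Run only when the template is present, so sourcing the file is harmless.
[ -f ecs-task-template.json ] && main || true
```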

Don’t forget to replace <YOUR SERVICE NAME>, <YOUR CLUSTER NAME>, and <YOUR TASK FAMILY> with the correct values.

That’s it. Now you have Continuous Deployment to the Amazon EC2 Container Service. For more information, please check the Literature section. Hopefully, this will be helpful for someone.

Literature

  1. How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
  2. Set up a build pipeline with Jenkins and Amazon ECS
  3. Amazon EC2 Container Service Docs
