Deploying Your App With ECS & Gitlab CI/CD

Abhijeet De Sarkar
Published in The Startup
7 min read · Sep 8, 2020

For the past few days I have been building a new microservice at my workplace. Our team decided to automate the deployment process so that whatever we are working on can be used by other teams and the feedback loop is a lot shorter.

Our goal is to deploy code whenever a Pull Request is merged to the master branch. I will be using AWS ECS and Gitlab CI/CD to solve this.

Understanding ECS

https://aws.amazon.com/ecs/

Amazon Elastic Container Service (ECS) is a container management service that allows us to run our Docker containers directly on managed clusters of Amazon EC2 instances or on Fargate, the serverless compute engine.

ECS eliminates the need to operate the cluster manually, saving us from a lot of headaches. Complex tasks like scaling servers up and down can be done in just a few clicks.

ECS uses ECR (Elastic Container Registry) to store our Docker images. We can specify whichever image we want to use in our application.

Deploying on ECS

Let’s dive into this. We will need an app to deploy our code….duh.

Here’s a basic app for us to work with: https://gitlab.com/iamads/my-ecs-demo. It is a basic server with a status API that says the service is running.

Assuming you are all set with the app, let’s check our Dockerfile.

All it does is take a node image and copy our code into the working directory. After that it installs all the required packages. It exposes port 8080, because this is the port on which the server is running; we expose this port so that requests from outside can reach the server. Finally we run the app with `node index.js`.
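The embedded gist is not reproduced here, but a minimal Dockerfile matching that description would look roughly like this (the base image tag and directory layout are assumptions):

```dockerfile
# Minimal sketch; the exact base image version is an assumption.
FROM node:12-alpine
WORKDIR /usr/src/app
# Copy the manifests first so the dependency layer can be cached.
COPY package*.json ./
RUN npm install
# Copy the rest of the code.
COPY . .
# The server listens on 8080.
EXPOSE 8080
CMD ["node", "index.js"]
```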

Building the docker image and pushing it to ECR

On AWS, go to ECR, click on Create repository, give the repository a name, and click on `Create repository`. This will create a new repository for you. The repository URL should look something like this:

<AWS ACCOUNT ID>.dkr.ecr.eu-central-1.amazonaws.com/<ECS REPOSITORY NAME>.

From now on we will call it <REPOSITORY_URL>.

Before we push our image to ECR, we will need to install the aws-cli. After installing it, run `aws configure`; it will ask you for your access key, secret and region.

With the credentials configured, we can log in to AWS ECR, build our Docker image, tag it with the <REPOSITORY_URL> and push it. We can then find this image under Amazon ECR -> Repositories.
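The login, build and push steps look roughly like this (aws-cli v1 syntax; v2 replaces `get-login` with `get-login-password`), with <REPOSITORY_URL> being the URL from the previous step:

```shell
# Log in to ECR (aws-cli v1).
$(aws ecr get-login --no-include-email --region eu-central-1)
# Build the image and tag it with the repository URL.
docker build -t <REPOSITORY_URL>:latest .
# Push it to ECR.
docker push <REPOSITORY_URL>:latest
```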

Before we get into it, we will have to create a security group for our ECS cluster:

aws ec2 create-security-group --group-name my-ecs-sg --description my-ecs-sg

Now let’s deploy this app on ECS. For that, we will have to create a cluster first.

Create Cluster

In the AWS dashboard, go to ECS and select Clusters, then click on Create Cluster.

ECS cluster template

Here, select the EC2 Linux + Networking cluster template, then go to the next step.

Configure Cluster 1

I used the above options to create a cluster.

Configure Cluster 2

Use the default VPC and subnets here, and select the security group we created beforehand. Finally, click on Create Cluster and your cluster will be created.

Cluster Created :)

Create Task Definition

We will now create a task definition. Go to the Task Definitions page and click on Create new Task Definition.

Fargate VS EC2

We will go with EC2. Click on next step.

Task Definition 1

Name the Task Definition.

Task Definition 2

Add the task memory and the task CPU (each should be greater than or equal to 128). Click on Add container.

Add container 1

Add the container name and the image URL (i.e. <REPOSITORY_URL>). Add the port mappings; here we map the host (EC2) port to the container’s port. Any request coming to the host port will be passed on to the container port.

Add container 2 (Environment Variables)

On scrolling down, we’ll find the environment variables section, where we can add any ENV variables needed by the app. For our example this is not needed.

Creating a Service

Select our cluster and, on the Services tab, click on Create.

Create Service

The launch type will be EC2; select the correct task definition and cluster, and name the service. The service type will be REPLICA. Set the number of tasks; I will go with 1, but increase it if you expect more load.

Going further, we will use no load balancer and disable autoscaling. Finally, create the service.
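For reference, the same service can be created from the CLI; the cluster, task definition and service names below are hypothetical placeholders for whatever you named yours:

```shell
aws ecs create-service \
  --cluster my-ecs-cluster \
  --service-name my-ecs-service \
  --task-definition my-ecs-task \
  --desired-count 1 \
  --launch-type EC2
```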

We can also setup ELB for this service. For more info: https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d

To check whether our service is running, get the public IP of the EC2 instance and make a GET request to <ip>:8080/status.
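Assuming the instance’s public IP is reachable, the check is a single request:

```shell
# <ip> is the public IP of the cluster's EC2 instance.
curl http://<ip>:8080/status
```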

Demo Service

If you get an unreachable error, you will have to allow inbound traffic to the instance by editing the inbound rules of the ECS security group.

This means allowing inbound TCP traffic to port 8080 from anywhere.
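With the AWS CLI, that rule can be added to the security group we created earlier:

```shell
# Allow inbound TCP traffic on port 8080 from anywhere (0.0.0.0/0).
aws ec2 authorize-security-group-ingress \
  --group-name my-ecs-sg \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```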

Integrating With Gitlab CI/CD

Note: You will need your role in the GitLab project to be either Maintainer or Owner.

Go to Settings -> CI/CD and then expand the Variables section.

CI/CD variables

Add these variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION. As a best practice, the IAM user behind AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY should only have programmatic access.

When adding these variables we can also decide whether to protect or mask them. A protected variable is only available on protected branches, which is useful if we want to deploy to production when code is merged into the master branch. Marking a variable as masked, on the other hand, means its value will not show up in job logs.

Now add a .gitlab-ci.yml file to your repository with the following contents.

Let’s go through it bit by bit

image: docker:19.03.10

services:
  - docker:dind

This is needed to run Docker commands inside the build job (Docker-in-Docker). You can find more info here.

variables:
  REPOSITORY_URL: <REPOSITORY_URL>
  TASK_DEFINITION_NAME: <TASK_DEFINITION>
  CLUSTER_NAME: <CLUSTER_NAME>
  SERVICE_NAME: <SERVICE_NAME>

Here we populate these variables with the corresponding values from the previous section.

before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

Here we install the requirements (curl, jq, python and pip) and then the aws-cli. We configure the aws-cli so that it can connect to our AWS account, log in to ECR, and finally set the IMAGE_TAG variable.
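The tag is simply the first 8 characters of the commit SHA. For example, with a made-up SHA:

```shell
# Hypothetical commit SHA; in CI, GitLab provides this automatically.
CI_COMMIT_SHA="f4ca124298bf04aabbccddeeff112233aabbccdd"
IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
echo "$IMAGE_TAG"   # → f4ca1242
```

This gives each build a unique, traceable image tag.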

stages:
  - build
  - deploy

Here we define the stages of our pipeline. Right now we only have the build and deploy stages, but we can easily add stages for unit tests, integration tests, etc.

build:
  stage: build
  script:
    - echo "Building image..."
    - docker build -t $REPOSITORY_URL:latest .
    - echo "Tagging image..."
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
    - echo "Pushing image..."
    - docker push $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - master

Here we build the image, then tag it with both latest and IMAGE_TAG. We do this so that if something goes bad, we can roll back to a stable version. Finally we push both tags. All of this happens when we merge/commit to the master branch.

deploy:
  stage: deploy
  script:
    - echo $REPOSITORY_URL:$IMAGE_TAG
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${AWS_DEFAULT_REGION}")
    - NEW_CONTAINER_DEFINITION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${AWS_DEFAULT_REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINITION}"
    - echo "Updating the service..."
    - aws ecs update-service --region "${AWS_DEFAULT_REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - master

And finally the deploy stage runs. It looks scary, but if we look carefully we’ll see it just fetches the current task definition, swaps in the new image, registers the result as a new task definition revision and updates the service, whenever code is merged/committed to the master branch.
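To see what the jq expression is doing, here is a minimal run against a mocked (heavily trimmed) describe-task-definition response; the container name and image values are made up:

```shell
# Mocked response from `aws ecs describe-task-definition`.
TASK_DEFINITION='{"taskDefinition":{"containerDefinitions":[{"name":"demo","image":"old-repo:old-tag"}]}}'
# Replace the image and keep only the container definition, as in the deploy stage
# (-c prints compact one-line JSON for readability here).
NEW_CONTAINER_DEFINITION=$(echo "$TASK_DEFINITION" | jq -c --arg IMAGE "repo:f4ca1242" \
  '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
echo "$NEW_CONTAINER_DEFINITION"   # → {"name":"demo","image":"repo:f4ca1242"}
```

The extracted container definition is exactly what register-task-definition expects for its --container-definitions argument.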

And we are done. As soon as we merge/commit code to the master branch, it will trigger the pipeline.

Here are some links which I found really useful:

https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d
https://medium.com/@Elabor8/a-complete-spring-boot-microservice-build-pipeline-using-gitlab-aws-and-docker-part-2-984c7107cead
https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c
