A Complete Spring Boot Microservice Build Pipeline using GitLab, AWS and Docker — Part 2.

Elabor8 Insights
10 min read · Dec 12, 2019


Alan Mangroo — Senior Engineering Consultant

Introduction

Hopefully you followed Part 1 of this blog, which explained how to build and run a Spring Boot Microservice application locally and introduced GitLab CI. In Part 2 we will look in detail at the remaining two jobs in the .gitlab-ci.yml file, then set up AWS ECS to complete the CI/CD pipeline.

The final two stages in our CI/CD pipeline are concerned with packaging and deploying the application.

build_image

build_image:
  image: docker:latest
  stage: package
  services:
    - name: docker:dind
  script:
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region ap-southeast-2)
    - docker build -t $REPOSITORY_URL:latest .
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
    - docker push $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$IMAGE_TAG

This job is responsible for building a Docker image of our Spring Boot application and pushing it to the AWS Elastic Container Registry (ECR).

The job builds the container image using a basic Dockerfile in the root directory of the project. Once built, the image is tagged and pushed to AWS ECR.
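
The Dockerfile in the repository is the source of truth, but a minimal Dockerfile for a Spring Boot fat JAR might look something like this (the base image and JAR path here are assumptions, not the project's exact values):

```dockerfile
# Minimal Spring Boot image: copy the built fat JAR and run it
FROM openjdk:11-jre-slim
COPY target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```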

Before pushing the image to ECR, GitLab CI needs to authenticate with AWS ECR. The credentials for this are provided as pipeline environment variables. We will look at the steps required to authenticate in more detail later in this blog.

This job also installs the AWS CLI into the docker:latest container using Pip so that it can access AWS ECR. An improvement would be to build our own Docker image based on docker:latest that has the AWS CLI installed. This would save having to do the AWS CLI installation during every build.
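
A sketch of such a builder image (the tags are assumptions; pinning a specific Docker version rather than latest would be safer):

```dockerfile
# Custom builder image with the AWS CLI baked in,
# avoiding the per-build apk/pip install steps
FROM docker:latest
RUN apk add --no-cache curl jq python py-pip \
 && pip install awscli
```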

Deploy

deploy:
  image: python:latest
  stage: deploy
  script:
    - pip install awscli
    - echo $REPOSITORY_URL:$IMAGE_TAG
    - aws ecs register-task-definition --region ap-southeast-2 --family MyTaskDef --cli-input-json file://aws/ecs-task-definition.json
    - aws ecs update-service --region ap-southeast-2 --cluster MyCluster --service MyService --task-definition MyTaskDef

Now that we have pushed a new Docker image to AWS ECR, we can deploy it using the Elastic Container Service (ECS). The deploy job is responsible for deploying the new image to ECS. It does this by first updating the ECS Task Definition, which is configured by aws/ecs-task-definition.json. This file specifies which image to deploy and the Spring profile to use when running the container. It also contains the configuration that enables our application to log to CloudWatch.
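
The shape of that file is roughly as follows (a trimmed sketch; the names, memory setting and image URI are placeholders, not the project's exact values):

```json
{
  "family": "MyTaskDef",
  "containerDefinitions": [
    {
      "name": "my-spring-boot-service",
      "image": "123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/my-spring-boot-service:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }],
      "environment": [{ "name": "SPRING_PROFILES_ACTIVE", "value": "aws" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/MyTaskDef",
          "awslogs-region": "ap-southeast-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```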

The final command updates the running service to use the new version of the Task Definition. The result of this is that ECS will deploy the new image.

There is some AWS setup required before the Deploy job will work in your GitLab account, this is covered next.

AWS Configuration

In the previous blog we walked through how to build and test our microservice. Our GitLab pipeline builds a Docker Image of our Spring Boot Microservice and now we can deploy it to AWS.

We are going to use AWS Elastic Container Service (ECS) to deploy the Docker container. There are several manual steps required to do this until I write a CloudFormation script to automate the process.

Log in to your AWS Console and let's get started.

Create an IAM User

The first thing we need is an IAM user with permission to work with ECS and ECR. Create the new IAM user as follows:

  1. Navigate to the IAM Console
  2. On the IAM console go to Users and click Add User
  3. Enter a name for the user, eg GitLabUser
  4. Check the “Programmatic Access” box so that access ID and secret are generated for the user.
  5. Click Next to move on to permissions.
  6. Click “Attach Existing Policies Directly”
  7. In the search box enter “AmazonEC2ContainerRegistryFullAccess”. This should leave you with one policy. Select it by checking the box.
  8. In the search box enter “AmazonEC2ContainerServiceFullAccess”. This should leave you with one policy. Select it by checking the box.
  9. Click Next. Optionally enter any tags you wish, then click Next again.
  10. Finally review and click Create User.

On the next page, copy the Access Key ID and Secret Access Key to a safe place. You will need these later. Note: this is the only time you will see the Secret Access Key; if you lose it you will need to regenerate the keys.

Now you have a new AWS user that can upload new Docker images and update an ECS cluster. In the next section we will update GitLab CI to use this user to deploy to AWS.

Add AWS Keys to GitLab CI

The GitLab pipeline needs to be able to run AWS CLI commands in order to push new images and update the ECS cluster. We do not want to store the keys in our file repository so instead we use environment variables to store them. To enable this we need to update GitLab CI with the AWS access keys that were generated when the user was created.

  1. In your GitLab project, navigate to Settings -> CI / CD in the left-hand panel.
  2. Scroll to Variables and expand the section.
  3. Create two new variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, containing the key and secret that were generated when your user was created.
  4. Click Save Variables.

Create AWS Container Registry

Next we need to create an AWS Elastic Container Registry. This is where our Docker images will be pushed to once they are built by the GitLab CI Pipeline.

  1. Ensure you are in your desired AWS region. My default region is ap-southeast-2 (Sydney)
  2. On the AWS Console navigate to ECR and create a new repository.
  3. Enter a repository name of your choice and click Create Repository.
  4. A new repository will be created and the URI will be displayed.
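
If you prefer the CLI, the repository can also be created with the AWS CLI (the repository name below is just an example):

```shell
# Create an ECR repository and print its URI (name is illustrative)
aws ecr create-repository --repository-name my-spring-boot-service \
  --region ap-southeast-2 \
  --query 'repository.repositoryUri' --output text
```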

Update CI configuration with ECR URI

We now need to update the project with the newly generated ECR URI.

  1. Copy the URI of the registry from the ECR console.
  2. In your fork of the project update the following files by replacing the existing URL with your new URL
  3. .gitlab-ci.yml update the REPOSITORY_URL and any references to the AWS Region.
  4. aws/ecs-task-definition.json update the image keeping the “:latest” portion of the URI and any references to the AWS Region.
  5. Commit and push the updated files to your repository.
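
After the update, the variables section of .gitlab-ci.yml might look something like this (the account ID and repository name are placeholders, and deriving IMAGE_TAG from GitLab's predefined CI_COMMIT_SHORT_SHA variable is one common choice, not necessarily the project's):

```yaml
variables:
  REPOSITORY_URL: 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/my-spring-boot-service
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA
```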

The commit will trigger a run of the GitLab pipeline; everything up to and including the “build_image” job should run successfully. This will take several minutes and results in a new Docker image being pushed to AWS ECR. The final deploy job fails because there is no ECS cluster or service to deploy to yet. We will create these next.


You should now be able to see the image listed in ECR.


Create an AWS ECS Cluster

Once we have an image in AWS ECR we can deploy it using ECS. To do this we must create an ECS cluster and service. If you need an introduction to ECS then please read this article.

  1. Navigate to ECS on the AWS Console
  2. Click on Clusters from the left hand panel
  3. Click Create Cluster
  4. Click on “EC2 Linux + Networking”
  5. Click Next Step
  6. Enter a cluster name of “MyCluster” (stick with this name, otherwise you will need to update it in .gitlab-ci.yml and ecs-task-definition.json)
  7. Select the EC2 instance type, e.g. t2.micro
  8. Enter 1 for the Number of instances
  9. Click Create

AWS will now create an ECS Cluster. This can take a few minutes as a new EC2 instance needs to be launched.


Update the ECS Role to allow access to DynamoDB and CloudWatch

Before moving on to create an ECS Service we need to update the ECS role that was just created. Our Spring Boot Microservice writes to a DynamoDB table, so we need to give the service permission to access DynamoDB. This could be done by providing access keys and secrets in the application configuration, but we want to avoid that and use an AWS Role instead. We can update the Role assigned to our EC2 instance to give it permission to access DynamoDB.

  1. Navigate to IAM -> Roles
  2. Enter “ecsInstanceRole” in the search box then click on the role.
  3. In the permissions tab click Attach Policies
  4. Type DynamoDB into the search box
  5. Check the AmazonDynamoDBFullAccess policy to select it
  6. Click Attach Policy to update the Role with the policy
  7. Next search for CloudWatchAgentAdminPolicy
  8. Check this policy and click Attach Policy

As this role is already assigned to our EC2 instance it will now be able to access DynamoDB and CloudWatch.
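
The same policies can be attached from the AWS CLI (assuming the role is named ecsInstanceRole, as above):

```shell
# Attach the DynamoDB and CloudWatch managed policies to the ECS instance role
aws iam attach-role-policy --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentAdminPolicy
```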

Create an ECS Service

Next we need to create an ECS Service that will run on the cluster we created.

  1. Navigate back to the ECS console page
  2. Open your cluster and click Create on the Services tab.
  3. Select EC2 as the Launch Type
  4. The Task Definition dropdown should list your new task definition.
  5. Select the latest version from the revision dropdown.
  6. Enter a service name of “MyService” (stick with this name, otherwise you will need to update it in .gitlab-ci.yml and ecs-task-definition.json)
  7. Enter 1 as the Number of Tasks
  8. Enter 0 for the Minimum Healthy Percent (with only one task, this allows ECS to stop the old task before starting the new one during deployments)
  9. Click Next Step
  10. Uncheck Enable service discovery integration
  11. Click Next Step
  12. Click Next Step
  13. Click Create Service

AWS ECS will now create the service. Click View Service to see it.
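
For reference, roughly the same service can be created with the AWS CLI (a sketch; it assumes the cluster, service and task definition names used in this tutorial):

```shell
aws ecs create-service --region ap-southeast-2 \
  --cluster MyCluster \
  --service-name MyService \
  --task-definition MyTaskDef \
  --desired-count 1 \
  --launch-type EC2 \
  --deployment-configuration "maximumPercent=100,minimumHealthyPercent=0"
```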


The cluster will now update and attempt to run the service. In the Events tab you should eventually see a message saying that the service has reached a steady state.


View the Spring Boot logs

Our Spring Boot service uses CloudWatch for logging. You will be able to view the logs on the AWS Console without needing to log in to the EC2 instance and tail the log files.

  1. Navigate to Cloudwatch
  2. Click on Logs on the left hand panel.
  3. You should see a new LogGroup listed that includes your task definition in its name.

If you open up the LogGroup and then look at the latest Log Stream you should see the Spring Boot output. The last few log messages should show that the DynamoDB table has been created.

Open Port 8080 for access

Our service should now be up and running. Grab the IP address of the EC2 instance from the EC2 dashboard.


Now try to access the service using the following URL, replacing the IP address with your own:

http://<EC2 IP Address>:8080/application/version

This request will fail because the AWS Security Group is blocking access to port 8080 on the EC2 instance. We need to update the EC2 Security Group to open port 8080 and allow access to the service.

  1. Open the EC2 Dashboard
  2. In the lower half of the window click the Security Group
  3. In the Inbound tab click Edit and add a new inbound rule for port 8080. You can restrict access to your own IP address if you want to prevent anyone else accessing it.
  4. Save the changes
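
If you know the Security Group ID, the same rule can be added with the AWS CLI (the group ID and CIDR below are placeholders):

```shell
# Open port 8080; narrow the CIDR to your own IP (e.g. 203.0.113.7/32) if preferred
aws ec2 authorize-security-group-ingress --region ap-southeast-2 \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 \
  --cidr 0.0.0.0/0
```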

Now you should be able to access your service using the IP address of the EC2 instance and port 8080:

http://<EC2 IP Address>:8080/application/version

You will also be able to POST temperature readings using Postman or another REST client. If your POST succeeds then a new item will be created in DynamoDB.
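
With curl, a POST could look something like the following (note that the endpoint path and JSON body here are assumptions; check the controller classes in the project for the real ones):

```shell
# Hypothetical endpoint and payload; adjust to match the project's controller
curl -X POST "http://<EC2 IP Address>:8080/temperature" \
  -H "Content-Type: application/json" \
  -d '{"temperature": 21.5}'
```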

Continuous Deployment

Now you can make an update and see the application get automatically built, tested and deployed. Try updating the version number in ApplicationController.java and committing the file. Then look at your CI/CD pipeline in GitLab; it should run to completion and deploy the new version of your service into AWS.

Summary

That’s it: you now have a complete CI/CD pipeline that will build and deploy a Spring Boot Microservice after a commit to a GitLab repository. It was a long journey, and hopefully this tutorial will help you kickstart your own Microservice projects.

Don’t forget to delete your ECS Cluster to remove any running EC2 instances. Failing to do this may result in AWS charges!

Thanks for reading and please leave a comment or some claps if this was useful to you.


Elabor8 Insights

These are the personal thoughts and opinions of our consulting team, uncut. For company endorsed content visit www.elabor8.com.au/blog