A Complete Spring Boot Microservice Build Pipeline using GitLab, AWS and Docker — Part 1.

Elabor8 Insights
10 min read · Dec 12, 2019


Alan Mangroo — Senior Engineering Consultant

Introduction

If you have ever had to build a microservice or a Continuous Integration / Continuous Delivery (CI/CD) pipeline you have probably spent hours researching and piecing together snippets of code and configuration to build your application and CI pipeline. This blog presents a complete solution that you can fork and use as a basis for your own microservice developments.

The project uses GitLab CI to build and deploy the application, so you will need to fork it into your own GitLab account. If you use a different Git hosting service you will not benefit from the GitLab CI pipeline described later in this blog, so create yourself a GitLab account and let's get started…

The first thing to do is to fork the project on GitLab and clone it to your machine:

https://gitlab.com/alan.mangroo/gitlabcidemo

This blog is split into two parts. Part One provides an overview of the project and will concentrate on building and running locally. We will also look at how you can use GitLab CI to create a CI/CD pipeline to build and deploy the application.

Part Two provides the detailed steps required to get the whole pipeline deployed using GitLab CI and AWS. If you just want to get the pipeline running then head straight over there. If you want to find out how the application and pipeline work, then carry on reading…

The microservice is built using the following widely used tools, and a basic understanding of them will help you follow this article.

  • Java 8 and Spring Boot
  • Maven
  • Cucumber
  • Docker
  • GitLab CI
  • DynamoDB
  • AWS ECS

If you need more details on any particular aspect of the project please post a comment and I will provide more detail where requested.

The Spring Boot microservice we are going to use is a very simple and contrived example. It is a basic temperature logger service that exposes a REST API. It allows you to POST a temperature reading that will be stored in an AWS DynamoDB table. This is all we need to demonstrate the CI/CD pipeline. When you fork the GitLab repository you are then free to modify the service to do whatever you wish.
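
To give a feel for the API before we dive in, here is a minimal sketch of what the temperature endpoint might look like. The class and field names are illustrative assumptions; check the controller classes in the forked repository for the real code.

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;

// Illustrative sketch only; names are assumptions, see the repository for the real controller.
@RestController
@RequestMapping("/temperatureReading")
public class TemperatureController {

    private final TemperatureRepository repository; // hypothetical repository backed by DynamoDB

    public TemperatureController(TemperatureRepository repository) {
        this.repository = repository;
    }

    // POST /temperatureReading/temperatures stores a reading and returns HTTP 201 CREATED
    @PostMapping("/temperatures")
    @ResponseStatus(HttpStatus.CREATED)
    public TemperatureReading create(@RequestBody TemperatureReading reading) {
        return repository.save(reading);
    }
}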

Now that you have forked the repository we will take a look at how to test and run it locally.

Testing

The application has unit and integration tests. You select which kind of test to run by invoking the appropriate Maven profile. The tests themselves are basic examples; the point here is to show how Maven profiles and plugins can be used to run different kinds of tests. Using profiles allows the faster-running unit tests to run first and give us quicker feedback. Integration tests are often slower, so we can avoid running them until all unit tests pass. The unit tests have no dependencies; the integration tests, however, depend on a local DynamoDB instance, which we will run using Docker.

The two profiles are defined in the Maven pom.xml file. The key thing to note is the various skip properties that configure exactly which tests each profile runs. For example, the local DynamoDB is only started by the integration-test profile, not the unit-test profile.

<profiles>
  <!-- The configuration of the unit-test profile. This is the default profile -->
  <profile>
    <id>unit-test</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <build.profile.id>unit-test</build.profile.id>
      <skip.integration.tests>true</skip.integration.tests>
      <skip.unit.tests>false</skip.unit.tests>
      <skip.startlocaldynamo>true</skip.startlocaldynamo>
    </properties>
  </profile>
  <!-- The configuration of the integration-test profile -->
  <profile>
    <id>integration-test</id>
    <properties>
      <build.profile.id>integration-test</build.profile.id>
      <skip.integration.tests>false</skip.integration.tests>
      <skip.unit.tests>true</skip.unit.tests>
      <skip.startlocaldynamo>false</skip.startlocaldynamo>
    </properties>
  </profile>
</profiles>

Notice that the integration-test profile does not run the unit tests. This saves the unit tests from being run twice in the CI/CD pipeline.

Run Unit Tests

The following command runs the unit tests using the maven-surefire-plugin. This plugin runs all test classes named *Test.java:

mvn clean test -P unit-test

The maven-surefire-plugin is configured as follows:

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M2</version>
  <configuration>
    <skipTests>${skip.unit.tests}</skipTests>
  </configuration>
</plugin>

You can see that the skip.unit.tests property is used to decide whether these tests should run.
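
As a concrete illustration, a class named like the one below matches the *Test.java pattern and would be picked up by surefire under the unit-test profile. It is a hypothetical sketch assuming plain JUnit 4 (the project also uses Cucumber); the class under test is invented for the example.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Matches surefire's *Test.java pattern, so it runs with: mvn clean test -P unit-test
public class TemperatureConverterTest {

    @Test
    public void convertsCelsiusToFahrenheit() {
        // TemperatureConverter is a hypothetical class used only for this example
        assertEquals(212.0, TemperatureConverter.celsiusToFahrenheit(100.0), 0.001);
    }
}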

Run Integration Tests

In order to run the integration tests locally you need to have Docker installed as the build will run a local DynamoDB in a Docker container.

The following command runs the integration tests using the maven-failsafe-plugin. This plugin runs all test classes named *IT.java:

mvn clean install -P integration-test

The maven-failsafe-plugin is configured as follows:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>3.0.0-M2</version>
  <configuration>
    <skipTests>${skip.integration.tests}</skipTests>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Again, the skip.integration.tests property is used to decide whether this plugin should run.
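
For illustration, a failsafe-matched test class might look like the sketch below; because the name ends in IT.java it only runs under the integration-test profile. The repository and model classes here are assumptions made for the example.

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

// Matches failsafe's *IT.java pattern, so it runs with: mvn clean install -P integration-test
@RunWith(SpringRunner.class)
@SpringBootTest
public class TemperatureRepositoryIT {

    @Autowired
    private TemperatureRepository repository; // hypothetical repository backed by the local DynamoDB

    @Test
    public void savesAReading() {
        // TemperatureReading is a hypothetical model class used only for this example
        assertNotNull(repository.save(new TemperatureReading(21.5)));
    }
}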

The integration tests connect to a DynamoDB instance to test the repository classes. Rather than connecting to AWS and using a real DynamoDB instance, we use Docker to run a local one. The docker-maven-plugin takes care of starting and stopping the DynamoDB Docker container. Have a look at the configuration of the plugin below. It specifies a Docker container that is started before the integration-test phase and stopped after it, along with the port mappings and a wait time. The wait time ensures that the Docker container has started before the integration tests run.

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <images>
      <image>
        <alias>dynamodb</alias>
        <name>amazon/dynamodb-local</name>
        <run>
          <wait>
            <time>10000</time>
          </wait>
          <ports>
            <port>8000:8000</port>
          </ports>
        </run>
      </image>
    </images>
    <skip>${skip.startlocaldynamo}</skip>
  </configuration>
  <executions>
    <execution>
      <id>docker:start</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>docker:stop</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>

When this profile runs on GitLab CI we do not want Maven to start the Docker container itself, because GitLab CI provides the local DynamoDB container as a service. To achieve this we pass the following flag to Maven: -Dskip.startlocaldynamo=true. We will cover this in more detail when looking at the GitLab CI configuration.

Once the Maven build completes, the resulting application JAR file will be in the target/ directory.

Run the standalone application locally

Now that you have run the automated tests and built the application Jar file you can run the microservice. Once running locally you can access the service using a client tool such as Postman or Insomnia.

There are different ways you may want to run the complete service from your local development machine. The project provides three examples, which we will look at next. The main difference between them is how we connect to DynamoDB. Take a look at TemperatureConfiguration.java to see the code that sets up the DynamoDB client for each of the scenarios below.
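
As a rough sketch of the idea (the real logic lives in TemperatureConfiguration.java), profile-specific DynamoDB clients can be wired up along these lines. The property names and region below are assumptions, not necessarily the project's actual values.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class TemperatureConfigurationSketch {

    // LOCAL and CI: point the client at a local or containerised DynamoDB endpoint
    @Bean
    @Profile({"LOCAL", "CI"})
    public AmazonDynamoDB localDynamoDb(@Value("${amazon.dynamodb.endpoint}") String endpoint) {
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(endpoint, "ap-southeast-2"))
                .build();
    }

    // LOCAL-AWS: static keys read from application-LOCAL-AWS.yml (fine for dev, not production)
    @Bean
    @Profile("LOCAL-AWS")
    public AmazonDynamoDB localAwsDynamoDb(@Value("${amazon.aws.accesskey}") String accessKey,
                                           @Value("${amazon.aws.secretkey}") String secretKey) {
        return AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(accessKey, secretKey)))
                .build();
    }

    // PRD: no explicit credentials; the EC2 instance role supplies them via the default provider chain
    @Bean
    @Profile("PRD")
    public AmazonDynamoDB awsDynamoDb() {
        return AmazonDynamoDBClientBuilder.standard().build();
    }
}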

Run locally using a local DynamoDB

The simplest way to run the application is to run it locally and connect it to a local DynamoDB. In this example we use Docker to run the local DynamoDB, which saves us from having to install and configure the instance ourselves.

First of all you need to start the local DynamoDB Docker Container as follows:

docker run -p 8000:8000 amazon/dynamodb-local

This exposes DynamoDB on port 8000. Once running, the Spring Boot application can connect to http://localhost:8000 to access the local DynamoDB instance.

Next start the application using the LOCAL profile.

java -jar -Dspring.profiles.active=LOCAL target/temperature-service-1.0-SNAPSHOT.jar

The application should start and the version can be found by going to http://localhost:8080/application/version using a browser.

The file application-LOCAL.yml contains the configuration for this profile. You will see that amazon.dynamodb.endpoint is set to http://localhost:8000.
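
For reference, the relevant entry looks something like this; the endpoint value is described above, but the exact nesting of the keys is an assumption, so check the file in the repository:

amazon:
  dynamodb:
    endpoint: http://localhost:8000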

Run locally using an AWS DynamoDB instance

The next option is to run the Spring Boot application locally, but connect it to the real DynamoDB in AWS.

In order to connect your local running service to AWS DynamoDB you will need to provide your AWS AccessKey and SecretKey. These keys allow the Spring Boot application to use DynamoDB and CloudWatch. Get these keys from the IAM dashboard in the AWS Console.

Edit the application-LOCAL-AWS.yml file and update it with your own keys. Then rebuild the application using:

mvn clean install -P integration-test

Now run the service using:

java -jar -Dspring.profiles.active=LOCAL-AWS target/temperature-service-1.0-SNAPSHOT.jar

The service should now start and connect to AWS. This option is fine for running locally while developing, but in production we do not want to store our access keys in configuration files. The next method of running the service avoids this.

Run in AWS EC2 and connect to AWS DynamoDB

The final option allows you to run the service on an AWS EC2 instance, as you might in production. This could be done by simply copying the JAR file to the instance, or by using ECS to run the service as a Docker container. The next blog in this series explains how to deploy to ECS in more detail, so let's look at how to run it without Docker.

When running on an EC2 instance there is no need to provide your AWS Access and Secret keys in config files if you use an IAM Role to grant the appropriate permissions to the instance. Do this by creating an IAM Role that has the policies for the services you require. In this case you need the AmazonDynamoDBFullAccess policy.

Then assign this role to the EC2 instance you want to run the service on.
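
The console works fine for this, but if you prefer the AWS CLI the steps look roughly as follows. The role, profile and instance names are placeholders invented for this sketch.

# Trust policy letting EC2 assume the role (save as ec2-trust-policy.json):
# {"Version": "2012-10-17", "Statement": [{"Effect": "Allow",
#   "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}]}

# Create the role and attach the DynamoDB policy
aws iam create-role --role-name temperature-service-role \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name temperature-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess

# EC2 attaches roles via an instance profile
aws iam create-instance-profile --instance-profile-name temperature-service-profile
aws iam add-role-to-instance-profile --instance-profile-name temperature-service-profile \
    --role-name temperature-service-role

# Attach the profile to your running instance (replace the instance id)
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=temperature-service-profile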

Copy the JAR file to your EC2 instance, then run the service using the following command on the instance:

java -jar -Dspring.profiles.active=PRD target/temperature-service-1.0-SNAPSHOT.jar

The application now has access to DynamoDB (or any AWS resources you wish to use) without needing the access keys.

Accessing the application using Postman

Whichever method you choose to run the application, you will want to try making calls to it. This is easily done using Postman.

Try making a GET request to http://<HOSTNAME>:8080/application/version. This should return the application version number set in ApplicationController.java.

Log some data by POSTing a JSON request to http://<HOSTNAME>:8080/temperatureReading/temperatures. This should return a JSON response and HTTP 201 CREATED.
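
If you prefer the command line to Postman, the same calls can be made with curl. The JSON body below is an assumption about the reading's shape; check the request model class in the repository for the actual fields.

# Check the application version (GET)
curl http://localhost:8080/application/version

# Log a temperature reading (POST); field names are assumed for this example
curl -i -X POST http://localhost:8080/temperatureReading/temperatures \
    -H "Content-Type: application/json" \
    -d '{"temperature": 21.5}'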

GitLab CI

Now that we have a working local environment we can look at how to use GitLab CI to build and deploy the application on every commit. GitLab CI allows you to create your own CI/CD pipeline without the need to install or manage a build server such as Jenkins.

The complete GitLab CI pipeline is specified in .gitlab-ci.yml. This file is included in the root directory of the same repository as our application code and therefore benefits from version control.

The presence of the .gitlab-ci.yml file will cause GitLab CI to automatically execute the pipeline after a commit. You don’t need to tell GitLab CI to use it.

A GitLab CI pipeline consists of Stages and Jobs. Each Job belongs to a Stage. Stages run sequentially. Jobs within a Stage can run in parallel if there is more than one Job for the Stage.

Our pipeline consists of 4 Stages configured as follows:

stages:
- unit-test
- integration-test
- package
- deploy

Head over to the CI/CD page in GitLab and click Run Pipeline.

If you have just forked the repository then the package and deploy stages will fail as there is some GitLab and AWS setup required. This setup is described in detail in the second part of this blog.

Let us take a look at the first two stages of our pipeline as they are related to the build and test of our application.

We will look at the final two stages, which deploy the application, in Part Two.

Unit-test

maven-unit-test:
  image: maven:3-jdk-8
  stage: unit-test
  script:
    - mvn $MAVEN_CLI_OPTS clean test -P unit-test

This simple Job runs during the unit-test stage. It executes the single Maven command that we looked at earlier in this blog. The image property specifies a pre-built Docker image providing the build environment, in this case JDK 8 and Maven.

Integration-test

maven-integration-test:
  image: maven:3-jdk-8
  services:
    - name: amazon/dynamodb-local
      alias: dynamodblocal
  stage: integration-test
  script:
    - mvn $MAVEN_CLI_OPTS install -P integration-test -Dspring.profiles.active=CI -Dskip.startlocaldynamo=true
    - mkdir target/dependency
    - (cd target/dependency; jar -xf ../*.jar)
  artifacts:
    paths:
      - target/*.jar
      - target/dependency
      - target/cucumber-reports/cucumber-html-reports/*

This Job is responsible for running the integration tests that we looked at earlier. When we ran these tests locally, Maven started and stopped the local DynamoDB Docker container. When running in GitLab CI we do not need Maven to manage the container, so we specify -Dskip.startlocaldynamo=true.

Instead, we let GitLab CI manage the Docker container by specifying a service in the Job. The container image is set in the services.name property, and services.alias sets the hostname for the container. This allows the Spring Boot application to reach the container at http://dynamodblocal:8000/, the endpoint specified in the application-CI.yml configuration file.

When this Job runs, the DynamoDB Docker container is started and stopped by GitLab CI. The Job also specifies some artefacts; these files are saved and made available to subsequent Jobs.

Well done for making it this far. Hopefully this blog has given you an understanding of how we can build and run our microservice locally and also use GitLab to build a CI/CD pipeline.

There are two more jobs in the .gitlab-ci.yml file. These perform the following tasks related to deploying the application:

  1. Build a Docker image of our Spring Boot application and push it to the AWS Docker Registry known as ECR.
  2. Run the Docker image using AWS ECS.

These two jobs are described in more detail in the next blog. Continue to Part Two to resume your journey towards a fully automated CI/CD pipeline.


Elabor8 Insights

These are the personal thoughts and opinions of our consulting team, uncut. For company endorsed content visit www.elabor8.com.au/blog