Continuous Delivery of a Java Spring Boot WebApp with AWS CDK, Codepipeline and Docker

Marc Logemann
May 4, 2020 · 15 min read

In a perfect world we have a good Git flow matching the company's needs. That flow is the base for an automated deployment pipeline which reacts to commits on certain branches and does all the heavy lifting for you: building the code, testing it, building a Docker image and deploying that image. On top of all that, it notifies you when something bad happens, or simply produces a nicely running application when everything is ok.

This tutorial will get you started with a minimal setup (which is in fact not that minimal), so that you can implement your own features afterwards while having quick success in the meantime. It consists of a really simple HelloWorld webapp written in Java (Spring Boot), which just returns an image and one clickable link, and an infrastructure project in TypeScript. Along the way we will use the following services from AWS:

  • ECS, ECR, IAM, Codepipeline, CodeBuild, SNS, VPC, Fargate, Secrets Manager

So let's start…

[Photo by Braden Collum on Unsplash]

Install the AWS CDK

Before installing the CDK, a quick reminder of what the CDK is. It's an infrastructure-as-code framework developed by AWS itself. You can code your infrastructure in several languages, and since a CDK project is just a project like every other software project, you can version it and have full visibility over the changes made to it. For this tutorial I will use TypeScript as my language of choice. Don't ask me why I provide a Java HelloWorld project and do the infrastructure project in TypeScript… ok, I will tell you: because I like TypeScript and Java (unbelievable, I know; normally you hate one of them when using the other) ;-)

Prerequisites

For installing the CDK, you need to install Node.js >= 10.3.0. Get it from the Node.js website or just use a package manager for your platform like Homebrew.

After that we just do the install and check the install afterwards:
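The CDK ships as an npm package, so the install and the check presumably look like this:

```shell
npm install -g aws-cdk
cdk --version   # verify the installation
```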

If you have any issues with the installation, head over to the official Getting Started page at AWS.

Now there are several ways to define the credentials and the region where you want to deploy the resources. If you already have the AWS CLI installed because of other journeys with AWS, the CDK picks up the default profile and you are all set.

If you don’t have the AWS CLI installed just make sure you have the following environment variables set in your shell.

  • AWS_ACCESS_KEY_ID – Specifies your access key.
  • AWS_SECRET_ACCESS_KEY – Specifies your secret access key.
  • AWS_DEFAULT_REGION – Specifies your default Region.

If you are on Mac or Linux, select a region of choice like so:
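For example (eu-central-1 is just an example region, pick your own; the two access-key variables are set the same way):

```shell
# Tell the AWS CLI / CDK which region to deploy to
export AWS_DEFAULT_REGION=eu-central-1
echo $AWS_DEFAULT_REGION
```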

On Windows you set environment variables differently, of course. The access key ID and the secret access key can be created in the AWS console in the IAM section. To make things easier (just for this tutorial), give the user admin rights so that you won't run into permission issues. You should narrow the permissions as soon as you leave the "tutorial" phase; a quick web search will help with that.

Let’s do the real work

Normally you would create your CDK project from scratch by stepping into a newly created folder and running cdk init, but to make things easier for you, I will go a different route and give you a GitHub repo which you can clone. We will do a walkthrough of the CDK project in there.

So just clone the repository mentioned above and let's see what we have in there. The starting point for normal CDK projects is a TypeScript file in the bin folder of the repository. In our case that's cont_deploy_docker.ts.

From there you normally instantiate the different stacks. If you have large CDK projects you can structure them quite nicely with multiple stacks, but for the sake of simplicity we keep it all in one stack (note: this is not a programming tutorial). So our starting point just creates a stack, which in fact is just a class. Let's get to that class now:
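The entry point presumably looks something like this sketch (CDK v1-era imports; the lib path is an assumption, check the repository for the real code):

```typescript
// bin/cont_deploy_docker.ts (sketch)
import * as cdk from '@aws-cdk/core';
import { ContDeployStack } from '../lib/cont_deploy_stack'; // path is an assumption

const app = new cdk.App();
// "ContDeployStack" is the stack ID we will later pass to "cdk deploy"
new ContDeployStack(app, 'ContDeployStack');
```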

Let's break those elements apart and see what they are doing. First off, you see a lot of imports because we touch a lot of AWS CDK APIs. You can also see this in the package.json file, where all the needed dependencies are defined.

Let’s create a VPC

Having a controllable logical network is always a good idea in CDK projects, so let's create this first by calling `new Vpc(this, 'my.vpc', {...})`. We just pass the very basic attributes like the CIDR range and the number of availability zones. The good thing is that the CDK does all the rest, like creating a private and a public subnet, an internet gateway and some more needed entities. Doing this by hand in the AWS console is a nightmare compared to the CDK.
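A sketch of that call; the CIDR range and AZ count are example values, not necessarily the ones from the repository:

```typescript
import { Vpc } from '@aws-cdk/aws-ec2';

const vpc = new Vpc(this, 'my.vpc', {
  cidr: '10.0.0.0/16', // example CIDR range
  maxAzs: 2,           // number of availability zones
});
```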

Let’s create an ECR Repository

First we need a repository where the Docker images are stored in a versioned way. Think of it like a package manager repository, but for Docker images. Creating one is as easy as this:
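A minimal sketch; the construct ID and repository name here are hypothetical:

```typescript
import { Repository } from '@aws-cdk/aws-ecr';

const ecrRepository = new Repository(this, 'EcrRepository', {
  repositoryName: 'helloworldwebapp', // hypothetical name
});
```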

If you did this by hand in the AWS console, you would need to create more things, like an ECS cluster, but that will be created automatically by a component (ApplicationLoadBalancedFargateService) we will define later on.

Let’s create the codepipeline components

A short intro to CodePipeline: with CodePipeline you can structure a build process in stages. You add actions to those stages and build a process chain where the outputs of one action can act as inputs to another. All of this is visualized quite nicely in the AWS console. It's fun to work with, and you can easily dig deeper into logs and details when looking at those stages.

We start by creating a PipelineProject.

We put the construction in a separate method to reduce the complexity in the constructor of the stack. The environment section defines the OS image the build server should run on. Since we are going to build a Java application, we go with an UBUNTU_14_04_OPEN_JDK_8 image. The privileged flag is quite important to set, because otherwise the Docker daemon can't run on the build machine. We need it after building our code to create the Docker image and push it to the repository.

In the environmentVariables section we can pass environment variables to the build machine's operating system. We will need them later on the shell.

Normally the buildspec definition is a YAML file kept in the root of the source repository (more on that later), but I like it to be in the CDK project instead, so we will define it as a JSON structure right in the PipelineProject. Even though it's placed in the PipelineProject, the buildspec is an integral part of CodeBuild, which is a different AWS service altogether; CodePipeline uses CodeBuild via a CodeBuildAction. We will go into details about the buildspec when we discuss that action.
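Putting this section together, the PipelineProject might be sketched like this. The container name "webapp", the variable name ECR_REPO and the exact docker commands are assumptions based on the build steps described later in this article; the real repository may differ:

```typescript
import * as codebuild from '@aws-cdk/aws-codebuild';
import { Repository } from '@aws-cdk/aws-ecr';

private createPipelineProject(ecrRepository: Repository): codebuild.PipelineProject {
  return new codebuild.PipelineProject(this, 'PipelineProject', {
    environment: {
      buildImage: codebuild.LinuxBuildImage.UBUNTU_14_04_OPEN_JDK_8,
      privileged: true, // required so the Docker daemon can run on the build machine
    },
    environmentVariables: {
      ECR_REPO: { value: ecrRepository.repositoryUri },
    },
    buildSpec: codebuild.BuildSpec.fromObject({
      version: '0.2',
      phases: {
        pre_build: {
          commands: ['$(aws ecr get-login --no-include-email)'],
        },
        build: {
          commands: [
            './gradlew bootJar',
            'docker build -t $ECR_REPO:latest -f docker/Dockerfile .',
            'docker push $ECR_REPO:latest',
          ],
        },
        post_build: {
          commands: [
            'printf \'[{"name":"webapp","imageUri":"%s"}]\' $ECR_REPO:latest > imagedefinitions.json',
          ],
        },
      },
      artifacts: { files: ['imagedefinitions.json'] },
    }),
  });
}
```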

Then you encounter this in the sourcecode:
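Based on the call quoted later in this article, the snippet in question attaches a managed policy to the project's auto-created role, presumably like so:

```typescript
import { ManagedPolicy } from '@aws-cdk/aws-iam';

// extend the auto-created CodeBuild role so docker can log in to and push to ECR
pipelineProject.role?.addManagedPolicy(
  ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryPowerUser')
);
```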

Again, this is added to the pipelineProject. The added policy is in fact needed by the docker commands we will trigger on the build machine's shell, defined in the buildspec. So bear with me… we will come to that too. But for the moment, let's just say that PipelineProject is not the best class name for a class which is only responsible for the CodeBuildAction and has nothing to do with the Pipeline object created later.

Get me the source

The first action of a build system is getting the source to work with. CodePipeline gives you plenty of *SourceActions to work with. In our case, the project we want to build resides on GitHub, so we will choose the GitHubSourceAction via:
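A sketch of that action; the action name and branch are assumptions, while the owner, repo and secret name come from this article:

```typescript
import { Artifact } from '@aws-cdk/aws-codepipeline';
import { GitHubSourceAction } from '@aws-cdk/aws-codepipeline-actions';
import { SecretValue } from '@aws-cdk/core';

const sourceOutput = new Artifact();
const githubSourceAction = new GitHubSourceAction({
  actionName: 'GithubSource', // assumed name
  owner: 'logemann',          // change this to your own GitHub username
  repo: 'HelloWorldWebApp',
  branch: 'master',           // assumed branch
  oauthToken: SecretValue.secretsManager('github/oauth/token'),
  output: sourceOutput,
});
```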

Now here is the thing: you can't work on my HelloWorldWebApp repository on GitHub, even though it's public and you could clone it without a problem. The GitHubSourceAction not only checks out the repo to get it onto the build machine, it also needs an OAuth token / personal access token of the owner to listen for repo changes. And since I can't give you my credentials, you need to change the owner property from "logemann" to your GitHub username and fork my repository.

The next thing is the oauthToken property. Since we are using AWS Secrets Manager to get the GitHub personal access token, we of course need to put the token into Secrets Manager first. So head over to GitHub, click on your profile image on the upper right, then Settings -> Developer settings -> Personal access tokens, and create a new token. Now there are at least two options for putting it into AWS Secrets Manager. If you have the AWS CLI installed, simply do the following:
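Presumably a create-secret call along these lines (replace the placeholder with your actual token):

```shell
aws secretsmanager create-secret \
    --name github/oauth/token \
    --secret-string <your-github-personal-access-token>
```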

Or, if you don't have the CLI installed, you can of course do it the GUI way in the AWS console. Go to Secrets Manager and add a new secret. IMPORTANT: be sure to select "Other type of secrets" and "Plaintext" as seen in the screenshot below (really remove the JSON structure which AWS puts in there by default):

[Screenshot: Secrets Manager console with "Other type of secrets" and "Plaintext" selected]

Then, on the next screen, just enter github/oauth/token as the secret name, because that's what we are going to reference in the CDK project. Leave the remaining fields at their defaults.

With that, the SourceAction should be ready to work on your forked repository of HelloWorldWebApp.

Define the build action

The next action we need to create, in order to place it in the pipeline later, is the CodeBuildAction. Let's look at the code and what it does:
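A sketch of the action, wiring the source output into the build and declaring the build output artifact (the action name is an assumption):

```typescript
import { Artifact } from '@aws-cdk/aws-codepipeline';
import { CodeBuildAction } from '@aws-cdk/aws-codepipeline-actions';

const buildOutput = new Artifact();
const buildAction = new CodeBuildAction({
  actionName: 'BuildAction', // assumed name
  project: pipelineProject,
  input: sourceOutput,       // artifact produced by the GitHubSourceAction
  outputs: [buildOutput],    // will carry imagedefinitions.json
});
```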

This looks simple because the blueprint of how this action works was already defined via the buildspec in the PipelineProject. Here we just use the output of the GitHubSourceAction as the input of the CodeBuildAction. Notice: you can also use two or more SourceActions and use the extraInputs property for the extra artifacts generated by the other sources. This way you could grab the source from GitHub and add additional configuration files from S3, for example.

Now let's look at our mysterious buildspec definition. The buildspec basically tells the build machine's operating system what to do. It has phases like "install", "pre_build", "build" and "post_build", which are executed sequentially, and you can group your shell commands in those phases. Since we defined an Ubuntu base image, we can just use all the usual Ubuntu Linux commands. If you need a package, just install it via apt-get.

In our case, we log in to Docker in the pre_build phase with the command $(aws ecr get-login --no-include-email). This command has some pitfalls. If you forget to activate the "privileged" flag mentioned before, you will get a weird Docker error message. If you don't attach additional policies to the role created by the CDK, you will get permission errors. That's why the already mentioned code pipelineProject.role?.addManagedPolicy(..) is so important: it extends the auto-created role with a pre-defined AWS policy called AmazonEC2ContainerRegistryPowerUser. As the name implies, the policy adds permissions with regard to ECR. Without it we can't log in to ECR, and we also can't push images to it.

Now let’s look at the main part of the build, the “build” step.

First we compile the Java Spring Boot example web application with ./gradlew bootJar. Then the Docker image is created with the "docker build" command (and tagged with the long URI from the repository). Then we put more tags on it and push it into the ECR repository via "docker push". The needed Dockerfile also resides in the Java example webapp repository, in the folder "docker". As I wrote before, if you want to place the Docker project in a separate Git repository, or somewhere completely different like S3, you can do so by defining two input sources and the extraInputs property. Note: watch out, because this results in multiple folders on the build machine; the shell always starts in the primary input folder.

Let's take a quick look at the Dockerfile:
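A sketch of what it looks like, based on the description below; the base image tag and the hardcoded JAR name are assumptions, so check the "docker" folder of the webapp repository for the real file:

```dockerfile
# Sketch — compact Alpine Linux base image with a JDK
FROM openjdk:8-jdk-alpine

# tini: a tiny init process that forwards signals properly to the JVM
RUN apk add --no-cache tini

# the JAR name is hardcoded here, as the article notes
COPY build/libs/HelloWorldWebApp-0.0.1.jar /bin/app.jar

# the Spring Boot application listens on 8080
EXPOSE 8080
ENTRYPOINT ["/sbin/tini", "--", "java", "-jar", "/bin/app.jar"]
```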

It's quite simple. We use an Alpine Linux image with a JDK as the base image because it's really compact and nice. Then we install a helper program called tini and copy the just-built JAR file into the /bin folder. As you can see, the JAR name is hardcoded in the Dockerfile, which of course won't work when bumping the version, but I want to keep it simple; this thing will be complex enough. At the end we just execute the JAR file via the tini helper program. The Spring web application exposes port 8080. This is where the application will run, which will be important later on.

But back to our buildspec. There is another very important step we need to do.

The post_build phase is the one after the build. Of course it is. I've written before that CodePipeline actions use the output of one step in the next one. The build step just needs to output one very specific file, which will be picked up by the deployment action later on. This file is called imagedefinitions.json and we generate it dynamically; that's what the printf statement is doing. At the end of the buildspec we tell the CDK to treat this file as the "output" artifact, via an artifacts section listing imagedefinitions.json.
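The printf trick can be sketched like this; the container name "webapp" and the repository URI are illustrative placeholders, and in the real buildspec the URI comes from the environment variables set on the PipelineProject:

```shell
# Illustrative values — in CodeBuild these come from environment variables
REPOSITORY_URI="123456789012.dkr.ecr.eu-central-1.amazonaws.com/webapp"  # hypothetical
CONTAINER_NAME="webapp"  # must match the containerName of the Fargate service

# Generate the file the ECS deploy action expects
printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$REPOSITORY_URI:latest" > imagedefinitions.json
cat imagedefinitions.json
```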

Important: this artifact ends up in the buildOutput variable we created before and attached to the buildAction via its outputs property.

Define the deployment

This action is the one we will place in the Deploy stage of the pipeline, which we will create in a moment.

Here we have two essential things. First, we use the buildOutput variable (artifact) mentioned in the previous section as input for this stage. As we remember, this artifact is just a JSON file which holds the Docker image URI and the name of the container to start.
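The deploy action presumably looks like this sketch (the action name is an assumption; fargateService refers to the Fargate service constructed further down):

```typescript
import { EcsDeployAction } from '@aws-cdk/aws-codepipeline-actions';

const deployAction = new EcsDeployAction({
  actionName: 'DeployAction',      // assumed name
  service: fargateService.service, // the ECS service of the Fargate construct
  input: buildOutput,              // contains imagedefinitions.json
});
```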

Next is the service we want to use for the ECS deployment. AWS offers two kinds of Docker runtime scenarios: one based on EC2 instances and one based on serverless resources, called Fargate, which works a bit more like AWS Lambda. So let's see how we construct the FargateService; this one will be quite complex behind the scenes, and a bit in front of them too.

We want this service to use our already defined VPC, so we pass it in; if we omitted the VPC, the CDK would create one for us. We also don't need to define the ECS cluster, it will likewise be created on demand, though of course you could also create it beforehand and supply it, as with the VPC. Then we tell AWS how many resources this container should run with. The containerName MUST match the one defined in imagedefinitions.json in the build action. The image can easily be obtained by asking the ecrRepository object we created before. One of the most important properties is containerPort. Our Spring web application runs on 8080, and even though it would have been easy to change that to 80, we have chosen not to. This means we need to tell Fargate on which port the container wants requests to be dispatched. Why "dispatched"? Under the hood this service, as the name implies, does a lot more than just creating a Fargate container instance. It creates various network-related components, including a load balancer. This load balancer listens on port 80 by default and then dispatches to 8080 on the container because of the containerPort property.
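A sketch of the construct; the sizing values and the container name "webapp" are assumptions:

```typescript
import * as ecs from '@aws-cdk/aws-ecs';
import { ApplicationLoadBalancedFargateService } from '@aws-cdk/aws-ecs-patterns';

const fargateService = new ApplicationLoadBalancedFargateService(this, 'FargateService', {
  vpc,                 // use our own VPC instead of an auto-created one
  memoryLimitMiB: 512, // example sizing
  cpu: 256,            // example sizing
  desiredCount: 1,
  taskImageOptions: {
    // dummy image for the very first deployment (the workaround explained below)
    image: ecs.ContainerImage.fromRegistry('okaycloud/dummywebserver:latest'),
    containerName: 'webapp', // MUST match the name in imagedefinitions.json
    containerPort: 8080,     // the Spring Boot app listens on 8080
  },
});
```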

Attention: we have a workaround in this part. We tell the FargateService to grab an "okaycloud/dummywebserver:latest" image from the Docker Hub registry via ContainerImage.fromRegistry("okaycloud/dummywebserver:latest").

This is a small Node Express webserver I provide on Docker Hub which only displays a "Waiting for Codepipeline Docker image" webpage. But why do we need this? On the very first deployment of our stack, we can't reference the yet-to-be-built image from our pipeline, because it won't exist when the stack starts up. So we can't write something like fromEcrRepository(ecrRepository, "latest"). This would result in an endless deployment loop on the console, because AWS would try to start the service over and over while the deployment waits for a successful startup that will never happen.

Another issue is that we need to grant the default task execution role that the CDK creates a policy to work with the ECR repository service. Because we used fromRegistry() in our code, the CDK can't know that later (on every pipeline run) we want to communicate with ECR instead of Docker Hub.
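One way to sketch that grant (the article's exact code may differ):

```typescript
// let the ECR repository grant pull rights to the task execution role
// that the CDK created for the Fargate task definition
ecrRepository.grantPull(fargateService.taskDefinition.executionRole!);
```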

Let's wire all this together

Now that we have all the relevant actions defined, let's put them into the stages of the pipeline we are going to create.
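Presumably something like this sketch, using the stage names and the pipeline name mentioned in this article:

```typescript
import { Pipeline } from '@aws-cdk/aws-codepipeline';

new Pipeline(this, 'Pipeline', {
  pipelineName: 'my_pipeline',
  stages: [
    { stageName: 'Source', actions: [githubSourceAction] },
    { stageName: 'Build',  actions: [buildAction] },
    { stageName: 'Deploy', actions: [deployAction] },
  ],
});
```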

This looks straightforward, right? Just put the referenced objects into the different stages. You can name them however you like, but "Source", "Build" and "Deploy" are somewhat of a de facto standard in CodePipeline. You can of course put more than one action into a stage, hence the array, and you can create as many stages as you like.

Let's run that beast

cd into the CDK project you cloned and modified at the two places I mentioned regarding the GitHubSourceAction. Instruct node to build the project via: npm run build

If the compiler spit out no errors, you need the ID of the stack we just coded. This can be obtained by issuing: cdk ls

Deployment is done via: cdk deploy ContDeployStack
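In sequence, the three commands look like this:

```shell
npm run build               # compile the TypeScript CDK project
cdk ls                      # should print: ContDeployStack
cdk deploy ContDeployStack  # deploy the stack to your AWS account
```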

If everything runs fine, and this will really take a while, you will see something like this:

✅ ContDeployStack

This means that the stack was deployed successfully, and guess what: CodePipeline is already working, because it has seen the GitHub repository for the first time and is trying to build it. So let's head over to the AWS console to see what's going on. After login, just click on Services on the upper left and type "codepipeline". You will see the overview of all pipelines, and one of them should be "my_pipeline". After clicking on it, you see the three stages we defined and the status of the run.

If all three stages are green, the whole process worked and AWS has built the latest revision of the source code and put the resulting Docker image into its Elastic Container Service. You can check whether the application is running by looking up the DNS name of the load balancer: click on Services in the AWS console and type "EC2". You will then see a Load Balancers menu entry in the left sidebar. Clicking it reveals this:

[Screenshot: EC2 console showing the load balancer with its DNS name]

Just copy the DNS name in the lower area and request it with your browser. You should see an image of a little boy aiming for the stars.

[Image: the running HelloWorldWebApp (photo by Lance Grandahl on Unsplash)]

Summary

This tutorial is quite long, and hopefully it works out of the box. There are certainly a few things you need to do in order to succeed, but all of them are mentioned in this post. Let me just recap what it means when everything is working correctly.

You get a fully automated continuous deployment pipeline from start to finish. A developer just needs to commit changes to a Git repo and AWS does the rest. A few minutes later, a new version of the Docker container with the latest changes from the source repository is up and running. And all this with ZERO downtime of your application.

Of course there are lots and lots of things which are not implemented but needed, like proper version handling (right now there is some very minimal versioning based on Git commit hashes in the ECR repository). All in all there is a lot missing. Furthermore, there are currently no email or Slack notifications. This could be added quite easily, but this tutorial needs to end at some point.

I hope this CDK project gives you a blueprint for assembling your own pipeline, or at least some ideas and code blocks for your journey.

If you want to know more about well-architected cloud applications on Amazon AWS, how to prototype a complete SaaS application, or starting your next mobile app with Flutter, feel free to head over to https://okaycloud.de for more info, or reach me at the usual places on the internet.

AWS Factory

Tutorials, Examples and Ideas around Amazon AWS

Marc Logemann

Written by

Entrepreneur and CEO of logentis.de and okaycloud.de, (AWS) Software Architect, likes Typescript, Java and Flutter, located in the Cloud, Berlin and Osnabrück.
