Kubernetes: Deploying Angular + Spring Boot Application in Google Kubernetes Engine (GKE)

Raghavendra Bhat
11 min read · Nov 20, 2019


Kubernetes has gained great popularity in recent years and has become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments.

In this article we will build a simple contacts demo web application with an Angular 8 frontend and a Java Spring Boot backend, using MongoDB for storage. We will containerize this app with Docker and deploy it to a Google Kubernetes Engine cluster.

This is a simple demo application demonstrating the basic CRUD operations: create a new contact, list all contacts, update a contact and delete a contact.

The Angular, Spring Boot and MongoDB stack was chosen here simply because these technologies are widely popular and many real-world applications are built with them, so this can serve as a starter project for someone building enterprise applications. You can check out the complete code from the GitHub repository below (with Okta enabled) or without Okta enabled.

Where are we deploying?

We are going to deploy this web application to Google Kubernetes Engine. For deploying to an Azure Kubernetes Service cluster, please refer to Kubernetes: Deploying Angular + Spring Boot Application in Microsoft Azure Cloud.

At the end we will have an application running in Google Cloud with the architecture below.

ContactsApp Kubernetes architecture

This article assumes you have basic knowledge of cloud platforms.

Here are a few good articles/videos that explain what Kubernetes is and what it offers to help develop truly cloud-native apps.

https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

https://medium.com/containermind/a-beginners-guide-to-kubernetes-7e8ca56420b6

Of course, you can also refer to the official Kubernetes documentation:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Here is a simple diagram showing the various phases involved in reaching our end goal.

Overall process steps involved

If you are familiar with developing applications in Angular + Spring Boot, you can skip the Development section and jump straight to the Continuous Integration section.

Development:

Two companion articles cover the development of this contacts application in great detail.

Part 1: Spring Boot and Angular web application walks you through the development of the base contacts application.

Part 2: Securing Angular + Spring Boot Application with Okta walks you through securing the base contacts application with Okta.

Alternatively, you can grab the code for Part 1 and Part 2 from GitHub.

You can deploy just the base app to the cloud and implement a different security model to secure your application there, or you can keep the same setup I currently have. Either way, the deployment process covered in this article is the same for both versions.

Dockerizing Contacts Application

Dockerizing our application means building Docker images for it.

Create the contacts-backend Docker image:

Here is a sample Dockerfile to create an image from our Spring Boot application once we have built it via Maven/Gradle.

You can refer to the Docker documentation to understand the structure of a Dockerfile.

FROM openjdk:8-jdk-alpine
VOLUME /tmp
# ARG JAR_FILE
COPY ./target/contacts-backend.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Create the frontend Docker image

The Dockerfile used to build the Angular frontend application is below:

FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 4200
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist /usr/share/nginx/html

In this case we first use the Node Alpine image to build our frontend application and then use the nginx image to build the final frontend image, a two-step (multi-stage) process.

We use nginx as the base image because we need a web server to serve the HTML/JavaScript requested by users, and nginx does exactly that. nginx is nicely explained in the articles below and in the official nginx documentation.

https://kinsta.com/knowledgebase/what-is-nginx/

https://medium.com/javarevisited/nginx-better-and-faster-web-server-technology-72ce5ad6305a

We use the nginx config file below (default.conf) to support Angular routing:

server {
  listen 4200;
  location / {
    root /usr/share/nginx/html/contactsApp;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
  }
}

Version Control:

We are using Git source control to develop and check in the code.

Continuous Integration:

Since we want our application to have continuous integration and delivery to the Kubernetes cluster, we are using Travis CI.

You can also refer to the article below for details on Travis CI.

The steps involved in building and deploying our application are shown below.

Travis CI Integration steps

Here are the corresponding .travis.yml and deploy.sh files.
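The exact files live in the GitHub repository; a minimal sketch of what the .travis.yml could look like for this flow is shown below (project id, zone, cluster and repository names are placeholders, and the encrypted variable names are generated by travis encrypt-file):

sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  # decrypt the service account key (this line is generated by `travis encrypt-file`)
  - openssl aes-256-cbc -K $encrypted_xxxxxxxx_key -iv $encrypted_xxxxxxxx_iv -in service-account.json.enc -out service-account.json -d
  # install the gcloud SDK, authenticate and point kubectl at the GKE cluster
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project <gcp-project-id>
  - gcloud config set compute/zone <cluster-zone>
  - gcloud container clusters get-credentials <cluster-name>
  # log in to Docker Hub with the credentials stored in the Travis settings
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
  # build the Spring Boot jar that the backend Dockerfile copies (path is an assumption)
  - mvn -f contacts-backend/pom.xml clean package
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master

And a hypothetical deploy.sh that builds, pushes and rolls out the images (the deployment and image names here are illustrative, not necessarily the ones used in the repository):

# build and push images tagged with the current commit SHA
docker build -t <dockerhub-user>/contacts-backend:$SHA ./contacts-backend
docker build -t <dockerhub-user>/contacts-frontend:$SHA ./contacts-frontend
docker push <dockerhub-user>/contacts-backend:$SHA
docker push <dockerhub-user>/contacts-frontend:$SHA
# apply all Kubernetes config files and point the deployments at the new images
kubectl apply -f k8s
kubectl set image deployments/contacts-backend-deployment contacts-backend=<dockerhub-user>/contacts-backend:$SHA
kubectl set image deployments/contacts-frontend-deployment contacts-frontend=<dockerhub-user>/contacts-frontend:$SHA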

A few key points to consider:

We are using Travis CI to build and deploy our application to Google Kubernetes Engine.

Why Google Cloud: it is free to start with and you get $300 of initial credit to experiment with. Also, Kubernetes was created by Google, so GKE has easy built-in support for deploying containerized applications to a Kubernetes cluster.

Prerequisites

1) Create a Google Cloud account, set up a project and link your billing account.

Please note that you need a credit card on file to create a Kubernetes cluster.

2) Once the project is set up, we need to create a Kubernetes cluster.

To create a cluster, navigate to the Kubernetes Engine page in the Google Cloud console.

Then select Create cluster.

It will take a few minutes to create the cluster; wait for the cluster creation to complete.

3) Once the cluster is created, create a service account, i.e. create a service account under the IAM section with the name travis-deployer to allow Travis to access the Kubernetes engine and create deployment objects.

The travis-deployer service account in our case is the cluster admin that performs all the operations within the cluster.

Please assign it the Kubernetes Engine Admin role.

Generate the service-account.json key file and download it to your local machine.

Note: this service account file needs to be encrypted and stored in Travis CI, which Travis CI then decrypts to authenticate with Google Kubernetes Engine and access our cluster.

PLEASE DO NOT CHECK THE SERVICE ACCOUNT FILE IN TO GITHUB.

4) Encrypt the service account file using the Travis CLI

The easiest way to do the encryption is to install the Travis CLI inside a running Ruby image and encrypt our service account file there.

The Travis CLI requires Ruby to be installed, and a Docker image is perfectly suitable for this.

This assumes you have Docker installed locally; if not, please refer to the link below to install Docker:

https://docs.docker.com/install/

Once Docker is installed, open a command prompt or terminal, cd to the directory where the service-account.json file is present and run the command below.

docker run -it -v $(pwd):/app ruby:2.3 sh

Here we mount the current working directory into the running container at a directory called /app so that the container can access our service-account.json file for encryption. We also start a shell in the running container so we can install the Travis CLI.

Use the command below to install the Travis CLI:

gem install travis

Once Travis is installed, log in to Travis with your GitHub account using

travis login

Once successfully logged in, you can run the travis encrypt-file command to encrypt the service-account.json file and store the decryption keys in the Travis CI settings connected to our project. The command for this is

travis encrypt-file <filename> -r <full name of the repository, including your GitHub user id>

Delete the original file from the project directory and check the encrypted file in to Git.

5) Configure your Docker ID and password in Travis CI to push the images to Docker Hub

Go to the project settings in Travis CI and configure your Docker username and password.

With all these prerequisites set up, we are now ready to run Travis CI builds and deploy our application to the Kubernetes cluster.

Now any code check-in to GitHub will trigger a Travis build and deploy the application to GKE.

You can now verify the Kubernetes pods and services created in GKE using kubectl get commands. You can find all the kubectl commands in the kubectl cheat sheet.
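For example, assuming kubectl is configured against the cluster (the resources listed will match whatever is defined in your deployment files):

kubectl get pods          # list the running pods and their status
kubectl get deployments   # list the deployment objects
kubectl get services      # list the ClusterIP / LoadBalancer services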

Our application is still missing a few configurations.

  1. Our contacts-backend needs to connect to the MongoDB deployment, which is secured with a password, and we haven’t set up the MongoDB password in the GKE cluster yet. Without it, neither the MongoDB deployment nor the contacts-backend deployment starts up. Setting up the password securely inside the GKE cluster is a one-time step and is covered in the Secrets section.
  2. Our application is still not reachable from the outside world, as we haven’t set up any routing to direct external traffic to the nodes within the cluster. The Ingress section covers the details of setting up routing in Kubernetes to allow external traffic to reach the running pods.

More about the Contacts Kubernetes Deployment:

Kubernetes deployments are declarative in nature, i.e. we only describe the desired state of our application and the Kubernetes API server constantly works to make that desired state a reality. Deploying to Kubernetes means applying YAML configuration to the API server via kubectl commands.

Below are the Kubernetes object types we are using in our contacts application.

Kubernetes Object types

You can refer to the Kubernetes documentation here for understanding and managing Kubernetes objects.

In our contacts app we are going to create the following Kubernetes objects via YAML configurations.

Deployment objects: Deployment objects are used to create Pods running containers. In our case we have contacts-backend, contacts-frontend and MongoDB deployment objects, each running its corresponding image inside a container.

The structure of a deployment object file looks like below:

Deployment file structure
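As a rough illustration of that structure, a deployment for the backend could look like the sketch below; the labels, image name and port are illustrative assumptions rather than the exact values from the repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-backend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: contacts-backend
  template:
    metadata:
      labels:
        component: contacts-backend
    spec:
      containers:
        - name: contacts-backend
          image: <dockerhub-user>/contacts-backend   # image built and pushed by the CI pipeline
          ports:
            - containerPort: 8080                    # Spring Boot's default HTTP port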

Service objects: To provide networking for the deployment objects, in other words to access the deployment objects and their sets of pods, we need Service object types.

The structure of a service object looks like below:

Service file structure
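A minimal ClusterIP service matching the backend deployment sketched above might look like this (again, names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: contacts-backend-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: contacts-backend   # targets the pods created by the backend deployment
  ports:
    - port: 8080                  # port exposed inside the cluster
      targetPort: 8080            # container port of the backend pods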

Volume objects: The life cycle of a pod is tied to the life cycle of its deployment, i.e. if the deployment is terminated for any reason, the associated pod is terminated too. For a pod running a database container, if the pod terminates we lose the underlying data, so we need to keep the data store outside of the pod; Persistent Volume Claims and Persistent Volumes in Kubernetes serve exactly this purpose.

A Persistent Volume Claim (PVC) is the requirement laid out by a pod to the Kubernetes API server, requesting the required storage space.

A Persistent Volume, on the other hand, is the actual volume created by the Kubernetes API on the basis of the claim.

The deployment specification file states the need for the persistent volume claim and also how this volume, once created, should be mounted in the running container.

A sample PVC looks like below:

PVC file structure
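For reference, a claim such as the one used by the MongoDB deployment could be declared as follows (the claim name matches the one referenced below; the size and access mode are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mango-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce        # the volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 2Gi         # amount of storage requested from the cluster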

In the contacts app, the mango-deployment file, which requires data storage, declares the persistent volume claim in the pod specification like below:

spec:
  volumes:
    - name: mango-storage
      persistentVolumeClaim:
        claimName: mango-persistent-volume-claim

and specifies in the container section how this storage should be mounted.

volumeMounts:
  - name: mango-storage
    mountPath: /data/db

In our case we mount the storage volume at /data/db inside the container, which is the default path where MongoDB stores its data.

Ingress

Kubernetes Ingress is a native resource for adding routes that direct traffic from an external load balancer to ClusterIP services inside the Kubernetes cluster.

It is a resource definition of how the various incoming requests should be routed to the different ClusterIP services of our running contacts application.

In our case, we have two simple rules configured as an ingress resource (a sample resource definition is sketched after this list):

  1. All requests starting with the path /backend/* should be routed to our backend ClusterIP service
  2. All other requests (*) should be routed to the frontend ClusterIP service
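A sketch of such an ingress resource, written against the extensions/v1beta1 API that was current when this article was published (the service names are assumptions based on the objects above, and depending on the controller version you may also need rewrite-target annotations):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by the nginx ingress controller
spec:
  rules:
    - http:
        paths:
          - path: /backend/
            backend:
              serviceName: contacts-backend-cluster-ip-service
              servicePort: 8080
          - path: /
            backend:
              serviceName: contacts-frontend-cluster-ip-service
              servicePort: 4200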

Ingress Controller

An Ingress Controller is a proxy service deployed inside the Kubernetes cluster that reads the ingress resource information and processes incoming requests accordingly. The ingress controller manages routing from the load balancer to the running pods; the controller makes the routing resource definition a reality.

Essentially, the ingress controller is a deployment that talks to the Kubernetes Ingress API to check whether any rules are configured and applies them to its own configuration to support the routing.

Typically an ingress controller is implemented by a third party, and in our case we rely on the nginx ingress controller on Google Kubernetes Engine.

We first need to create and configure the nginx ingress controller in order for our ingress resource configuration to work.

Please refer to the nginx ingress controller deployment documentation to set up the nginx ingress controller.

In GKE, the nginx controller can be installed using Helm, which is a package manager for Kubernetes (think of npm for Node.js applications).

To install Helm and the ingress controller we have to run a few commands once against our running cluster.

The easiest way to run commands against our cluster in GKE is to open a Cloud Shell.

Once we activate Cloud Shell we can install Helm as per the official link below (installing Helm through the script).

Once Helm is installed, we can install the nginx controller following the "Using Helm" section of the official link.
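For example, with Helm 3 and the official ingress-nginx chart, the installation boils down to something like the commands below (the release name is arbitrary; these are the upstream defaults, not project-specific values):

# add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx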

This nginx controller automatically creates a Google Cloud load balancer for us and also creates the ingress pod that routes the incoming requests.

Secrets: Secrets in Kubernetes hold information that we want to store securely inside the cluster. They can then be made available to running containers via environment variables. In our case the contacts-backend pod needs to connect to the secured MongoDB pod with the secret password supplied as an environment variable.

For our application we use one secret called mangopassword to connect to MongoDB, and we can create this secret inside the running cluster using the command below in Cloud Shell.

kubectl create secret <type of secret> <name of secret identifier> --from-literal <name of key>=<value to be associated>

Ex: kubectl create secret generic mangopassword --from-literal MANGOPASSWORD=test111
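The contacts-backend and MongoDB deployments can then consume this secret as an environment variable; a sketch of the relevant container section looks like this (the environment variable name matches the key created above, the rest is illustrative):

env:
  - name: MANGOPASSWORD
    valueFrom:
      secretKeyRef:
        name: mangopassword   # the secret created with kubectl above
        key: MANGOPASSWORD    # the key inside that secret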

With all these changes in place, and after a successful check-in of all our deployment files to the GitHub master branch, the Travis continuous integration build kicks off, builds the images and configures our deployments in GKE with the updated images.

After a successful deployment we can check the load balancer in the Google Cloud console and get its public IP address.

If all of the above steps are done correctly, you can visit the contacts web page and it should start serving requests.

Raghavendra Bhatt
Technical Architect, Capgemini USA
LinkedIn
