Kubernetes: Deploying Angular + Spring Boot Application in Microsoft Azure Cloud

Raghavendra Bhat
10 min read · Dec 7, 2019


Recently Kubernetes has gained great popularity and has now become the de facto standard for deploying containerized applications at scale in private, public and hybrid cloud environments.

In this article we will look at deploying a simple contacts demo web application built with an Angular 8 frontend, a Java Spring Boot backend, and MongoDB as the data store.

It is a simple demo application demonstrating basic CRUD operations: create a new contact, list all contacts, update a contact, and delete a contact.

The Angular, Spring Boot, and MongoDB stack was chosen simply because it is widely popular and many real-world applications are built with it, so this can serve as a starter project for someone building enterprise applications. You can check out the complete code from the GitHub repository below (with Okta enabled) or without Okta enabled.

If you are looking to deploy to Google Cloud (GKE), please refer to the article Kubernetes: Deploying Angular + Spring Boot application in Google Kubernetes Engine (GKE).

At the end we will have an application running in the Azure cloud with the architecture shown below.

ContactsApp Kubernetes architecture

This article assumes that you have basic knowledge of the cloud platform.

Here are a few good articles/videos to understand what Kubernetes is and what it offers for developing truly cloud-native apps.

https://www.youtube.com/playlist?list=PLLasX02E8BPCrIhFrc_ZiINhbRkYMKdPT

https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

https://medium.com/containermind/a-beginners-guide-to-kubernetes-7e8ca56420b6

Here is a simple visual diagram showing the phases/steps involved in achieving our end goal.

Overall steps involved

Development:

Two earlier articles cover the development of this contacts application in detail.

Part 1: Spring Boot and Angular web application walks you through the development of the base contacts application.

Part 2: Securing Angular + Spring Boot Application with Okta walks you through securing the base contact application with Okta.

Alternatively you can grab the code for part 1 & part 2 from GitHub.

You can deploy just the base app to the cloud and implement a different security model to secure your application, or you can use the same setup I currently have. Either way, the deployment process covered in this article is the same for both versions.

Dockerizing the Contacts Application

Dockerizing our application involves building Docker images for its components.

Create the contacts-backend Docker image:

Here is the sample Dockerfile that creates an image from our Spring Boot application once we have built it with Maven/Gradle.

You can refer to the Docker documentation to understand the structure of a Dockerfile.

FROM openjdk:8-jdk-alpine
VOLUME /tmp
# ARG JAR_FILE
COPY ./target/contacts-backend.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Create the frontend Docker image

The Dockerfile used to build the Angular frontend application is below.

FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 4200
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist /usr/share/nginx/html

This is a two-step process: we first use the node:alpine image to build the frontend application, then use the nginx image to build the final frontend image.

We use nginx as the base image because we need a web server to serve the HTML/JavaScript requested by users, and nginx does exactly that. nginx is nicely explained in the articles below and in the official nginx documentation.

https://kinsta.com/knowledgebase/what-is-nginx/

https://medium.com/javarevisited/nginx-better-and-faster-web-server-technology-72ce5ad6305a

We use the nginx config file below (default.conf) to support our Angular routing:

server {
    listen 4200;
    location / {
        root /usr/share/nginx/html/contactsApp;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}

Version Control:

We use Git source control to develop and check in the code.

Continuous Integration:

Since we are deploying the application on the Microsoft Azure platform, we will use Azure DevOps for continuous build and deployment into the Azure Kubernetes Service (AKS) cluster. Azure Pipelines helps automate the entire process of building our application and deploying it to the production cluster.

Azure Pipelines builds the artifacts and deploys them based on the pipeline configuration defined in a YAML file. The Azure build agent reads the YAML configuration and executes each task definition.

Prerequisites

  1. Create a Microsoft Azure account and link a billing account.

You can refer to the links below for this.

Please note that you need a credit card to set up the account and to create a Kubernetes cluster. Azure provides $200 of free credit to experiment with the cloud platform.

  2. Create an Azure Kubernetes cluster as per the official link below.

The official site above explains how to create a cluster and deploy an application by applying YAML scripts directly. For our example, however, we will use Azure Pipelines to build our artifacts, create Docker images tagged with the latest build, push the images to the Docker container registry, and then deploy to AKS. We will automate the entire build and deployment process through Azure Pipelines.

Please refer to the link below for creating an Azure pipeline that deploys to AKS. In order to use the ‘Deploy to Kubernetes Service’ task, make sure the multi-stage pipeline preview feature is turned on.

In our example we create the pipeline project as per the link above, but the task definitions are slightly different. A visual representation of the pipeline steps for our contacts application looks like this:

Azure pipeline configuration

Please grab the pipeline configuration file for the contacts app from here.

The pipeline task definitions above essentially download the code from GitHub, build and create the Docker images, and push the images to the Docker registry. After the images are pushed, the build agent runs the Kubernetes deployment task, which executes kubectl apply to create our Kubernetes objects inside AKS.
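For orientation, here is a minimal sketch of what such a pipeline definition could look like. The task inputs shown are standard Azure Pipelines options, but the service connection names, repository names, and file paths are assumptions for illustration; the actual configuration is in the linked file.

trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Build the backend image and push it to Docker Hub via a Docker service connection
  - task: Docker@2
    inputs:
      containerRegistry: 'dockerhub-connection'      # assumed service connection name
      repository: '<dockerhub-account>/contacts-backend'
      command: 'buildAndPush'
      Dockerfile: 'contacts-backend/Dockerfile'
      tags: '$(Build.BuildId)'

  # Apply the Kubernetes manifests to the AKS cluster
  - task: KubernetesManifest@0
    inputs:
      action: 'deploy'
      kubernetesServiceConnection: 'aks-connection'  # assumed service connection name
      manifests: 'k8s/*.yml'
      containers: '<dockerhub-account>/contacts-backend:$(Build.BuildId)'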

Service Connections

Azure Pipelines authenticates with Docker Hub and the AKS cluster using service connections. To set up a service connection for the build pipeline, follow the link below.

Once the service connections are set up, we can reference them in the build tasks for authentication.

For the contacts application, I have created the following service connections for the Kubernetes cluster, Docker Hub, and GitHub respectively.

With the above configuration our pipeline is ready; you can commit to GitHub and run the pipeline, which builds and deploys the application to AKS.

You can now verify the Kubernetes pods and services created in AKS using kubectl get commands. You can find all kubectl commands in the kubectl cheat sheet.
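For example, from a shell connected to the cluster (the pod name below is only illustrative):

# list the pods and services created by the deployment
kubectl get pods
kubectl get services

# inspect a pod that is not reaching the Running state
kubectl describe pod contacts-backend-<pod-id>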

Our application is still missing a few pieces of configuration:

  1. Our contacts-backend needs to connect to the MongoDB deployment, which is secured with a password, and we haven’t set up the MongoDB password in the AKS cluster yet. Without it, neither the MongoDB deployment nor the contacts-backend deployment starts up. Setting up the password securely inside the AKS cluster is a one-time step and is covered in the Secrets section.
  2. Our application is still not reachable from the outside world, as we haven’t set up any routing to direct external traffic to the nodes within the cluster. The Ingress section covers setting up routing in Kubernetes to allow external traffic into the running nodes.

More about the Contacts Kubernetes Deployment:

Kubernetes deployments are declarative in nature: we only describe the desired state of our application, and the Kubernetes API server works constantly to make that desired state a reality. Deploying to Kubernetes means applying YAML scripts to the API server via kubectl commands.

Below are the Kubernetes object types and their meanings.

Kubernetes Object types

You can refer to the Kubernetes documentation here for understanding and managing Kubernetes objects.

In our contacts app we are going to create the following Kubernetes objects via YAML configurations.

Deployment objects: Deployment objects are used to create Pods running containers. In our case we have contacts-backend, contacts-frontend, and MongoDB Deployment objects, each running its corresponding image inside a container.

The structure of a Deployment object file looks like this:

Deployment file structure
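As a rough illustration, a minimal Deployment for the contacts backend could look like the following sketch; the image name, labels, and port are assumptions, and the actual files are in the repository.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-backend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts-backend          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: contacts-backend
    spec:
      containers:
        - name: contacts-backend
          image: <dockerhub-account>/contacts-backend:latest   # assumed image name
          ports:
            - containerPort: 8080    # default Spring Boot port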

Service objects: To provide networking for the Deployment objects, in other words to access the Deployments above and their sets of Pods, we need to create Service objects.

The structure of a Service object looks like this:

Service file structure
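A minimal ClusterIP Service for the same backend might look like this; the names and ports are assumptions matching the Deployment sketch above.

apiVersion: v1
kind: Service
metadata:
  name: contacts-backend-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    app: contacts-backend      # selects the pods created by the Deployment above
  ports:
    - port: 8080               # port exposed inside the cluster
      targetPort: 8080         # container port on the backend pod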

Volume objects: The life cycle of a Pod is tied to the life cycle of its Deployment; if the Deployment is terminated, the associated Pod is terminated too. For a Pod running a database container, losing the Pod means losing the underlying data, so we need a data store that lives outside the Pod. PersistentVolumeClaim and PersistentVolume objects in Kubernetes serve exactly this purpose.

A PersistentVolumeClaim (PVC) is the request a Pod makes to the Kubernetes API server for the required storage space.

A PersistentVolume, on the other hand, is the actual volume created by the Kubernetes API on the basis of that claim.

The Deployment specification declares the need for the persistent volume claim and also how the volume, once created, should be mounted in the running container.

A sample PVC looks like this:

PVC file structure
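Here is a hedged sketch of such a claim, using the claim name referenced later by the MongoDB deployment; the storage size is an assumption.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mango-persistent-volume-claim   # referenced by the MongoDB deployment below
spec:
  accessModes:
    - ReadWriteOnce            # mounted read-write by a single node
  resources:
    requests:
      storage: 2Gi             # assumed size; adjust as needed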

In the contacts app, the mango-deployment file that requires data storage references the persistent volume claim in the Pod specification like this:

spec:
  volumes:
    - name: mango-storage
      persistentVolumeClaim:
        claimName: mango-persistent-volume-claim

and the container section specifies how this storage should be mounted:

volumeMounts:
  - name: mango-storage
    mountPath: /data/db

In our case we mount the storage volume at /data/db inside the container, which is the default path where MongoDB stores its data.

Ingress

Kubernetes Ingress is a native resource for adding routes that direct traffic from an external load balancer to ClusterIP services inside the Kubernetes cluster.

It is a resource definition describing how various requests should be routed to the different ClusterIP services of our running contacts application.

In our case, we have three simple rules configured as an Ingress resource (a sketch follows the list below).

  1. All requests with a path starting with /backend/* should be routed to our backend ClusterIP service.
  2. All requests with a path starting with /backend-api/* should be routed to our backend-api-server ClusterIP service (API services).
  3. All other requests (*) should be routed to the frontend ClusterIP service.
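A minimal sketch of such an Ingress resource is shown below; the service names and ports are assumptions that follow the naming used in the earlier sketches, and the actual definition is in the repository.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: contacts-ingress
spec:
  ingressClassName: nginx                   # handled by the nginx ingress controller set up below
  rules:
    - http:
        paths:
          - path: /backend                  # rule 1: backend ClusterIP service
            pathType: Prefix
            backend:
              service:
                name: contacts-backend-cluster-ip-service
                port:
                  number: 8080
          - path: /backend-api              # rule 2: backend API ClusterIP service
            pathType: Prefix
            backend:
              service:
                name: contacts-backend-api-cluster-ip-service
                port:
                  number: 8080
          - path: /                         # rule 3: everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: contacts-frontend-cluster-ip-service
                port:
                  number: 4200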

Ingress Controller

An ingress controller is a proxy service deployed inside the Kubernetes cluster that reads the Ingress resource information and processes incoming requests accordingly. The ingress controller manages routing from the load balancer to the running Pods; it makes the routing resource definition a reality.

Essentially, the ingress controller is a deployment that talks to the Kubernetes Ingress API to check whether any rules are configured and applies them to its own configuration to support the routing.

Ingress controllers are typically implemented by third parties; in our case we rely on the nginx ingress controller maintained by the Kubernetes community.

We first need to create and configure the nginx ingress controller in order for our Ingress resource configuration to work.

Please refer to the nginx ingress controller deployment documentation to set it up.

In AKS, the nginx controller can be installed using Helm, a package manager for Kubernetes (think npm for Node.js applications).

To install Helm and the ingress controller we have to run a few commands once against our running cluster.

The easiest way to log in to our running AKS cluster is to open up Cloud Shell.

Once Cloud Shell is active, we connect to our cluster and install Helm as per the official link below (installing Helm through the script).

Once Helm is installed, we can install the nginx controller as described in the ‘Using Helm’ section of the official link.
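For reference, with Helm 3 the installation boils down to something like the following commands run in Cloud Shell; the namespace and release name are arbitrary choices.

# add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic --create-namespace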

The nginx controller automatically creates an Azure Load Balancer for us and also creates the ingress Pod to support routing of incoming requests.

Secrets
Secrets in Kubernetes hold information we want to store securely inside the cluster. They can then be made available to running containers via environment variables. In our case the contacts-backend Pod needs to connect to the secured MongoDB Pod with the secret password supplied as an environment variable.

For our application we use one secret called mangopassword to connect to MongoDB; we can create this secret inside the running cluster using the following command in Cloud Shell.

kubectl create secret <type> <secret-name> --from-literal=<key>=<value>

Example: kubectl create secret generic mangopassword --from-literal=MANGOPASSWORD=test111
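To hand that secret to a container, the deployment’s container spec can reference it as an environment variable. A minimal sketch, assuming the variable name MANGO_PASSWORD is what the application reads and the image name used earlier:

containers:
  - name: contacts-backend
    image: <dockerhub-account>/contacts-backend:latest   # assumed image name
    env:
      - name: MANGO_PASSWORD            # illustrative variable name read by the app
        valueFrom:
          secretKeyRef:
            name: mangopassword         # the secret created with kubectl above
            key: MANGOPASSWORD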

With all these changes in place, the Azure pipeline builds the code, pushes the images, and deploys them to the AKS cluster.

After a successful deployment we can check the load balancer in the Azure portal and get its public IP address.

If all of the above steps are done correctly, you can visit the contacts web page at that IP address and it should start serving requests.
