Part 2: Deploying Lagom with a CI/CD Pipeline on Kubernetes

Corey Auger
Mar 20, 2019 · 12 min read

In my last post I demonstrated how quickly you can develop a highly scalable application in Lagom. In this post I want to create an automated deployment pipeline for that project. By the end of this post you will be able to make commits to master and have those changes automatically built and deployed into your Kubernetes cluster. Super cool!

Source Code

You will need to fork and check out my GitHub repo to follow along with this post.

Overview

Let’s briefly describe the components that make up our application. The Lagom application from my last post consists of:

  • A Cassandra database for storing Entities.
  • A Kafka bus for publishing events.
  • Akka clustering (for uh .. Clustering)
  • Our two microservices: the Gateway Service and the Project Manager Service

In addition to this we will be deploying to a kubernetes cluster with some additional CI/CD needs. These include:

  • minikube (a local Kubernetes cluster)
  • Helm (for installing Helm charts on Kubernetes)
  • Jenkins (to build and deploy code)
  • A Docker Registry (to hold our compiled Docker images)

This post will take us through everything that is required: first setting up our Kubernetes cluster, then deploying Kafka and Cassandra, deploying our project, and finally setting up the CI/CD pipeline.

Kubernetes Nomenclature

Before we continue, it is worth defining a few key terms that are used frequently in this post and when dealing with Kubernetes:

Container

A lightweight and portable executable image that contains software and all of its dependencies

Image

Stored instance of a container that holds a set of software needed to run an application

Kubectl

A command line tool for communicating with a Kubernetes API server.

Minikube

A tool for running Kubernetes locally

Node

A node is a worker machine in Kubernetes

Pod

The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster

Service

An API object that describes how to access applications, such as a set of Pods, and can describe ports and load-balancers

Deployment

An API object that manages a replicated application

For a complete list see: https://kubernetes.io/docs/reference/glossary/?fundamental=true

Getting started

If you haven’t already, you will need to fork and check out the Lagom project on GitHub:

https://github.com/coreyauger/tasktick

Most of the deploy scripts that we will be working with in this post are located inside the “deploy” folder of the project.

Kubernetes Setup (minikube)

minikube

I will be working with Kubernetes locally using minikube, which simulates a production cluster on our local system. If you don’t yet have minikube on your system, you will need to install it now. Follow the installation steps for your OS here:

https://kubernetes.io/docs/tasks/tools/install-minikube/

Once you have installed minikube and verified the install was a success, you can start it with the following:

➜ minikube start --cpus 4 --memory 8192

Note: I use hyperkit and thus required the additional arg: --vm-driver=hyperkit

kubectl

kubectl is your command line interface for working with a Kubernetes cluster. It does not matter whether this is minikube, a production Kubernetes cluster, OpenShift, or anything else built on top of Kubernetes. Think of kubectl as the client to the Kubernetes API.

To install kubectl, follow the installation steps here: https://kubernetes.io/docs/tasks/tools/install-kubectl/

helm

Helm is a package manager for Kubernetes. In keeping with Kubernetes’ nautical naming, Helm deploys packages called “charts” (Helm charts). Helm consists of two parts: a server that sits inside Kubernetes called “Tiller”, and the Helm client (akin to kubectl). We will be using Helm to install and manage Kafka.

To install helm on minikube, follow the steps here:

https://helm.sh/docs/using_helm/#installing-helm

Deploying Dependencies

Kafka

Thanks to Helm, deploying Kafka could not be easier; deploying it into the cluster manually would involve considerably more work. To appreciate everything that is being done for us, visit https://strimzi.io/ and take a look through the documentation.

There are a ton of things to consider when deploying and managing Kafka on a production cluster; the Strimzi operator makes this considerably easier.

To install the Strimzi Helm chart, issue the following commands:

➜ helm repo add strimzi http://strimzi.io/charts/

➜ helm repo update

➜ helm install --namespace kafka --name tasktick-kafka strimzi/strimzi-kafka-operator --debug

Verify that the kafka operator has been started:

➜ helm ls

Now let’s create a single broker Kafka cluster with persistence provided by a Kubernetes PersistentVolume. This can be done by executing the script I provide in the Lagom “tasktick/deploy” directory.

➜ cd tasktick/

➜ kubectl apply --namespace=kafka -f deploy/kafka-persistent-single.yaml

Note: In production, you would want to run at least 3 kafka brokers on designated nodes.

Verify that we see our kafka / zookeeper pods up and running:

➜ kubectl get pods --namespace=kafka

Note: it might take a minute until you see all of them.

Cassandra

For our Lagom project we made the choice to use Cassandra to store our persistent entities. We could also have chosen from a number of other providers: Postgres, Couchbase, etc. Cassandra is, in my opinion, the best choice. However, there is not yet a stable Helm chart that people would recommend using in production, and there is a fair bit of debate on whether you should run your Cassandra cluster inside Kubernetes at all.

We will not worry about this for our local test but it is something that I think you should be aware of for your production cluster.

For now we will go ahead and use the experimental helm chart.

➜ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/

➜ helm repo update

➜ helm install -f deploy/cassandra-values.yaml --version 0.10.2 --namespace "cassandra" -n "cassandra" incubator/cassandra

This will create a single Cassandra node in the cassandra namespace. Make sure that we see the pod up and running with the following:

➜ kubectl get pods --namespace=cassandra

We can also do a final check of the Cassandra internals with the following command:

➜ kubectl exec -it --namespace cassandra $(kubectl get pods --namespace cassandra -l app=cassandra,release=cassandra -o jsonpath='{.items[0].metadata.name}') nodetool status

Akka Cluster

Akka has awesome support for Kubernetes. This includes cluster seed discovery and a whole host of useful features. To learn more about Akka Management you can take a look through the documentation here:

https://developer.lightbend.com/docs/akka-management/current/

To get things rolling, we first need to include the Akka Management dependencies in our build.sbt.
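A minimal sketch of what those additions might look like; the value name and the version number here are my own placeholders rather than what the repo actually uses:

val akkaManagementVersion = "1.0.0" // illustrative version only

// Group all of the Akka Management / Cluster Bootstrap deps in one place.
val akkaManagementDeps = Seq(
  "com.lightbend.akka.management" %% "akka-management"                   % akkaManagementVersion,
  "com.lightbend.akka.management" %% "akka-management-cluster-http"      % akkaManagementVersion,
  "com.lightbend.akka.management" %% "akka-management-cluster-bootstrap" % akkaManagementVersion,
  "com.lightbend.akka.discovery"  %% "akka-discovery-kubernetes-api"     % akkaManagementVersion
)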

Grouping all the deps together in a single value like this means we can simply append them to the libraryDependencies of both “impl” projects (libraryDependencies ++= akkaManagementDeps).

Now that we have included the dependencies, we have to tell Akka Management when to bootstrap the cluster. Note that you will no longer have to do this with Lagom 1.5; it will be taken care of for you under the hood.

You can add this small bit of code right near the top of your service initialization. Here is an example of where I have added it in ProjectManagerService.scala:
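A minimal sketch of that call, assuming the Akka Management 1.0 package layout (older 0.x releases expose AkkaManagement from akka.management directly):

import akka.management.scaladsl.AkkaManagement
import akka.management.cluster.bootstrap.ClusterBootstrap

// actorSystem is the service's ActorSystem, available in the Lagom application cake.
// Start the management HTTP endpoint, then kick off cluster bootstrap.
AkkaManagement(actorSystem).start()
ClusterBootstrap(actorSystem).start()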

Akka Management needs some additional permissions from Kubernetes in order to properly automate bootstrapping. Without them you would get the following error message when trying to form a cluster:

Forbidden to communicate with Kubernetes API server; check RBAC settings. Response: [{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:default:default\" cannot list pods in the namespace \"default\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}

To make the permission change there are 3 things that need to be done:

  • Create a Role for the akka cluster management
  • Create a Service Account
  • Bind the Role to the Service account.

Create a role:

➜ kubectl apply --namespace=kafka -f deploy/akka-cluster-member-role.yaml

Create Service Accounts:

➜ kubectl apply --namespace=kafka -f deploy/projectmanager-sa.yaml

➜ kubectl apply --namespace=kafka -f deploy/gateway-sa.yaml

Bind the Role to the Service Accounts:

➜ kubectl apply --namespace=kafka -f deploy/projectmanager-sa-role-binding.yaml

➜ kubectl apply --namespace=kafka -f deploy/gateway-sa-role-binding.yaml

Deploying our Lagom Services

There are a number of steps to creating and exposing our services inside kubernetes:

  • Adding a Service Locator for service discovery
  • Create a docker image for both services
  • Push the images to a registry that Kubernetes has access to

Create a “Deployment”, including:

  • information to create the pod
  • Docker image file to use
  • Scaling profile
  • Environment setup (including ports)

Next we create a “Service”

  • This groups our pods into a service
  • Exposes a port on every node that we can use to talk to our service.

Service Locator

One thing that was taken care of for us when we ran our project locally was Service Discovery (this is different from Akka bootstrap node discovery). Now that we are running inside Kubernetes we will need to provide a production implementation for this. For our project it is enough to simply use application.conf to define our service locations. To accomplish this we first need to add “ConfigurationServiceLocatorComponents” in both GatewayLoader.scala and ProjectManagerLoader.scala.
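A minimal sketch of the loader change, where ProjectManagerApplication is a stand-in for whatever the application class is actually named in the repo:

import com.lightbend.lagom.scaladsl.client.ConfigurationServiceLocatorComponents
import com.lightbend.lagom.scaladsl.server.{ LagomApplication, LagomApplicationContext, LagomApplicationLoader }

class ProjectManagerLoader extends LagomApplicationLoader {
  // In production, resolve other services from the lagom.services config block.
  override def load(context: LagomApplicationContext): LagomApplication =
    new ProjectManagerApplication(context) with ConfigurationServiceLocatorComponents

  // loadDevMode and describeService are unchanged and omitted here.
}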

Now inside our application.conf we can define the location of the other service as follows:
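Based on the service name and URL described in the notes below, the entry looks roughly like this (ConfigurationServiceLocatorComponents reads service locations from the lagom.services block):

lagom.services {
  projects = "http://projectmanager-svc.default:9000"
}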

A few things are worth noting here.

  • The name “projects” in the case above is defined in “ProjectManager.scala” of the API project.
  • The URL we give it, "http://projectmanager-svc.default:9000", comes from the name “projectmanager-svc” defined in deploy/projectmanager/projectmanager-service.yaml, combined with the “default” namespace we deployed into in Kubernetes, and finally port 9000, which we expose via a NodePort.

Creating a Docker Image

Install

If you don’t have it already, you will need Docker installed on your machine.

https://docs.docker.com/v17.12/install/

Create Image

Docker image creation is handled by the sbt-native-packager plugin. This was included by default when we created our Lagom project. If for some reason you need to add the plugin, consult the sbt-native-packager documentation.

One additional setting I added to the bottom of my build.sbt was the base image to use for the Docker container. In this case:
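Something along these lines, where the exact base image is my assumption rather than necessarily what the repo pins (any JRE 8 image will do):

// build.sbt: base image sbt-native-packager uses for the generated Dockerfile
dockerBaseImage := "openjdk:8-jre-alpine"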

For ease of use and testing, minikube provides a way of pointing your local Docker environment at the Docker daemon inside minikube. To do this, open a terminal and execute the following:

➜ eval $(minikube docker-env)

Now when you execute “docker images” you should see a number of images that exist inside minikube

➜ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 11 months ago 97MB

k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 11 months ago 148MB

Now you are ready to publish both service images. To do this, enter your sbt shell, navigate to each of the “impl” projects, and run docker:publishLocal:

sbt

sbt> project gateway-impl

sbt> docker:publishLocal

sbt> project projectmanager-impl

sbt> docker:publishLocal

If all goes well you should now see both images listed when you do a “docker images” command.

Deploy the Services

Now that you have the Docker images, you can run both deploy scripts and make sure that things are working.

NOTE: my final checked-in scripts are for the CI/CD pipeline that we get to at the end of the post. You will therefore need to adjust the “image” lines in the deployment scripts to use the image you created above. That is, you will need to change:

image: 127.0.0.1:30400/projectmanager-impl:1.0-SNAPSHOT

To

image: projectmanager-impl:1.0-SNAPSHOT

For both deployments.

Deploy Project Manager

Let us first deploy the project manager service.

➜ kubectl apply -f deploy/projectmanager/projectmanager-deploy.yaml

➜ kubectl apply -f deploy/projectmanager/projectmanager-service.yaml

To check on the deployment we can list the pods

➜ kubectl get pods

NAME READY STATUS RESTARTS AGE

projectmanager-<pod-id> 1/1 Running 0 now

We can now view the state of the pod or the log files with the commands:

➜ kubectl describe pods projectmanager-<pod-id>

➜ kubectl logs projectmanager-<pod-id>

Assuming that the logs look correct at this point:

  • We have formed an akka cluster
  • We are properly talking to our Cassandra database
  • Kafka is behaving

We should now perform a quick test of the service itself. Let’s do this by forwarding a port from the pod and hitting it with “curl” to test the endpoint.

➜ kubectl port-forward projectmanager-<pod-id> 9000:9000

➜ curl -XPUT http://localhost:9000/api/project/add -d '{"name": "test", "owner": "62f1e22a-44f6-11e9-b210-d663bd873d93", "team": "62f1e22a-44f6-11e9-b210-d663bd873d93", "description": "test"}'

Again checking the logs for the pod should be enough to verify that things are working correctly.

Deploy Gateway Service

Exactly as you have done before for the Project Manager Service you need to create a “deployment” and a “service” for our Gateway Service. Let’s run both scripts:

➜ kubectl apply -f deploy/gateway/gateway-deploy.yaml

➜ kubectl apply -f deploy/gateway/gateway-service.yaml

We should again check the pod with “describe” and “logs” to make sure that everything looks normal.

We now have both services added to Kubernetes. Adding more pods of either container will let us scale up our services. Very cool!

Our last step is exposing the Gateway Service to the outside world so we can hit the web app endpoints. One of the ways to accomplish this in Kubernetes is by adding an “Ingress”. Here is an excellent post that explains the differences between NodePort, LoadBalancer, and Ingress:

https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

Adding Ingress

The first thing we need to do for minikube is to enable the ingress nginx addon:

➜ minikube addons enable ingress

Let’s make sure it is running:

➜ kubectl get deployments --all-namespaces

You should see nginx-ingress-controller

With our ingress controller running, we can now apply our configuration for it.

➜ kubectl apply -f deploy/tasktick-ingress.yaml

Verify that everything looks OK with the command:

➜ kubectl get ingress

One of the things we added to our ingress was a host filter for “tasktick.io”, so now we need to add an entry in /etc/hosts that maps this host to localhost:

➜ sudo vim /etc/hosts

127.0.0.1 tasktick.io

We should now be able to point our browser at http://tasktick.io and see our web application. Congratulations! If there are any problems, work your way backwards using the logs to address them before moving on to the last section.

Continuous Delivery

The following CI/CD pipeline was adapted from the blog post:

https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2

The last step is to automate the build and deploy of our web application. There are a few things we need to do in order to take control of this process.

  • Deploy a Docker Registry inside our kubernetes cluster
  • Deploy Jenkins into kubernetes
  • Configure Jenkins
  • Profit !

Deploy a Docker Registry

I have provided a script that will deploy a Docker registry inside minikube; simply run:

➜ kubectl apply -f deploy/cicd/registry.yaml

You should now be able to push Docker images to the new registry. Let’s first build a Jenkins image and push it to the new registry.

Note: you might need to use “socat” to proxy your docker registry. (I did not have to, but some have)

Deploy Jenkins

Grab the latest Jenkins image from Docker Hub and build our Jenkins image for the new registry.

➜ docker pull jenkins:latest

➜ docker build -t 127.0.0.1:30400/jenkins:latest -f deploy/cicd/jenkins/Dockerfile deploy/cicd/jenkins
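Depending on your setup, you may also need to push the freshly built image into the registry before the cluster can pull it, along the lines of:

➜ docker push 127.0.0.1:30400/jenkins:latest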

Now that we have a Jenkins image available to our Kubernetes cluster, we can deploy it with the script:

➜ kubectl apply -f deploy/cicd/jenkins/jenkins.yaml

Next, we need to grab the admin password for Jenkins with the following:

➜ kubectl exec -it `kubectl get pods --selector=app=jenkins --output=jsonpath={.items..metadata.name}` cat /var/jenkins_home/secrets/initialAdminPassword

Finally we can hit our Jenkins service and begin to configure our build and deploy.

➜ minikube service jenkins

Go through the setup process choosing the default plugins.

Once you are inside Jenkins, we need to add the Jenkins “sbt” plugin.

Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins, click Credentials on the left, select the Jenkins store, then Global credentials (unrestricted), and then Add Credentials in the left menu.

The following values must be entered precisely as indicated:

Kind: Kubernetes configuration (kubeconfig)

ID: kenzan_kubeconfig

Kubeconfig: From a file on the Jenkins master

File: /var/jenkins_home/.kube/config

Finally click Ok.

Configure Jenkins Build

The final step is to configure the build.

Create a new build script and add the following steps:

Github step pointing at your fork of the repository:

https://github.com/<your_github_user>/tasktick

Add an SCM polling schedule of every 5 minutes:

H/5 * * * *

Add an sbt step with the sbt command set to

“gateway-impl/docker:publishLocal”

Add a shell script step with the command

docker push 127.0.0.1:30400/gateway-impl:1.0-SNAPSHOT
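For that push to work, the image has to be tagged with the registry prefix (127.0.0.1:30400). One way to get that tag from docker:publishLocal, and my assumption about how the checked-in build does it, is to set the Docker repository in build.sbt:

// build.sbt: prefix published image names with the in-cluster registry
dockerRepository := Some("127.0.0.1:30400")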

Add a kubernetes pipeline step with the file

deploy/gateway/*.yaml

Save your build script. Trigger it and make sure that it does indeed fetch, build, and deploy the gateway service. When this is done, repeat the steps above for the “projectmanager” project.

Conclusion

At this point you should get yourself a warm cup of tea, sit back with your feet up, and trigger a few deploys. What we have achieved here is a fully automated deployment of a highly scalable web application. Not only can we iterate on features, deploying to production daily, but we can also easily scale this application to support millions of clients. Well done!
