From Monolith to Microservice Architecture on Kubernetes, part 3 — Deploying our Scala app as a microservice

Jeroen Rosenberg
Jul 19, 2017 · 5 min read

In this blog series we’ll discuss our journey at Cupenya of migrating our monolithic application to a microservice architecture running on Kubernetes. In the previous parts of the series we’ve seen how the core components of the infrastructure, the Api Gateway and the Authentication Service, were built. In this post we’re going to see how we converted our main application into a microservice and arrived at a fully working Kubernetes setup we could go live with.


Migrating our application

So we’ve explored the core components in our microservice architecture: the Api Gateway and the Authentication Service. We’ve looked at quite a bit of Scala code and seen how the Kubernetes deployments are configured. I started this blog series by describing our current software stack. Remember, from a deployment perspective it looks rather traditional.

Let’s see what it took to migrate our monolith to the new microservice infrastructure and benefit from:

  • Increased agility and smaller & faster deployments
  • Individual scalability of services
  • More fine-grained control over service SLAs (i.e. some services are crucial and need failover in place, while others do not)
  • The ability to have autonomous teams responsible for a subset of services

Migrating our frontend application

For the frontend it was pretty straightforward. We basically needed another nginx pod, similar to the one we used as ingress, with the only difference that it needs to have the static resources bundled. Therefore, we created a custom Docker image that copies the contents of the dist output folder of our AngularJS application.

FROM nginx
COPY dist /usr/share/nginx/html
COPY frontend-nginx.conf /etc/nginx/conf.d/default.conf

We simply pulled this Docker image in our pod and created an associated Kubernetes service. It didn’t need any special config because the Api Gateway was already configured to route all non-/api traffic to this service.
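A minimal sketch of such a pod (via a deployment) and service; all names, labels and the image tag are illustrative, not our actual config:

# Illustrative frontend descriptor; names, labels and the image tag are assumptions.
apiVersion: extensions/v1beta1   # Deployment API group in common use at the time (2017)
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        # The custom nginx image with the bundled static resources
        image: eu.gcr.io/my-docker-repo/frontend:abc1234
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80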

Migrating our backend application

For the backend we started off simple. Let’s not immediately split up into a million tiny microservices, but begin by converting our application into a single Kubernetes service. It might be missing the point a little bit, but it’s a great place to start. Besides, it shouldn’t be hard to build upon this proof-of-concept and split up our app into multiple microservices. Slowly we will learn where it makes sense to draw the boundaries.

To deploy our application in Kubernetes we first needed to build a container image out of it. We chose Docker, and since the main application is a Scala project we could use the Docker plugin of the SBT Native Packager, so we didn’t have to write a custom Dockerfile (we also have a few services written in Python for which we did write the Dockerfile manually). In our build.sbt we just had to apply a few settings.

We use the oracle-jdk:jdk-1.8 base image. Since we use Google Container Engine (GKE) we have to set our dockerRepository to eu.gcr.io and ensure our packageName is of the format $dockerRepoName/$appName (e.g. my-docker-repo/api-gateway). We also generate a short commit hash from git to be used as the Docker image version.
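A sketch of what those settings could look like with sbt-native-packager (sbt 0.13-style syntax; the repository and package names are examples, and the git hash lookup is simplified):

import scala.sys.process._

enablePlugins(JavaAppPackaging, DockerPlugin)

// JDK 8 base image
dockerBaseImage := "oracle-jdk:jdk-1.8"

// Push to the Google Container Registry used by GKE
dockerRepository := Some("eu.gcr.io")

// Must be of the format $dockerRepoName/$appName
packageName in Docker := "my-docker-repo/cupenya-microservice"

// Use the short git commit hash as the Docker image version
version in Docker := "git rev-parse --short HEAD".!!.trim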

With this config, building the Docker image is part of the build process and can easily be executed in a continuous integration environment like Jenkins. Of course we still needed to write a Kubernetes descriptor file and deploy it to run our app in Kubernetes. We’ll get to how we automated that process in a later part of this blog series. For now let’s just have a look at what a basic descriptor file for our new microservice looks like.
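The sketch below captures the essentials; the names, labels, ports, image tag and environment variable names are illustrative rather than our exact config:

# Illustrative descriptor for the (not so micro) microservice.
apiVersion: extensions/v1beta1   # Deployment API group in common use at the time (2017)
kind: Deployment
metadata:
  name: cupenya-microservice
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: cupenya-microservice
    spec:
      containers:
      - name: cupenya-microservice
        # The image version is the short git commit hash produced by the sbt build
        image: eu.gcr.io/my-docker-repo/cupenya-microservice:abc1234
        ports:
        - containerPort: 9000
        env:
        # Secret used to decode the JWT into an authentication context
        - name: AUTHENTICATION_SECRET
          valueFrom:
            secretKeyRef:
              name: authentication-secret
              key: secret
        # Database made available through the mongo-svc proxy service (see below)
        - name: MONGO_HOST
          value: mongo-svc
        - name: MONGO_PORT
          value: "27017"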

There we have it: our first microservice, which is, as explained earlier, not so micro. It just pulls in the Docker image created for the whole Scala application, based on the git commit hash. It gets the authentication secret, needed to decode the JWT into an authentication context, as an environment variable from a Kubernetes secret. We kept the same format for the authentication context and made sure the reference tokens that had already been handed out remained valid. We don’t want our clients to notice us moving to a new infrastructure. It was not difficult to port this logic, though; in the end we were still keeping this information in the same database.

Connecting to an external database

One part we’re still missing is the connection to the database. As you can see in the descriptor file, we expect the database, in this case MongoDB, to be available through a Kubernetes service called mongo-svc on port 27017. Since our database is not running inside pods, but is managed outside of the Kubernetes cluster, we needed a simple proxy service.

The trick here is that we define a Kubernetes service without a pod selector. By convention this type of service will bind to a Kubernetes Endpoints object with the same name. Therefore we create a manual Endpoints descriptor and specify the address of the mongo or mongos (MongoDB’s routing service for sharded clusters) server. In the case of, say, an Elasticsearch cluster you could also specify multiple IPs and have the Kubernetes service handle the load balancing.
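A sketch of the proxy service and its manual Endpoints descriptor; the IP address is a placeholder for the actual database host:

# Service without a pod selector; traffic goes to the manually defined Endpoints below.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  ports:
  - port: 27017
    targetPort: 27017
---
# Endpoints object with the same name, pointing at the external mongo/mongos server.
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-svc
subsets:
- addresses:
  - ip: 10.0.0.42   # placeholder; list multiple addresses to load balance, e.g. an Elasticsearch cluster
  ports:
  - port: 27017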

Deploying the whole shebang

The final step to get our first microservice deployed and made available through the Api Gateway is to set up a Kubernetes service for it. However, as you’ve probably noticed, the Api Gateway currently has a ‘limitation’: it registers only a single resource per service. Since our first microservice is actually the whole application, with many routes and served resources, we had to work around that. We simply defined a Kubernetes service for each resource we wanted to serve and pointed them all to the same pod. Here’s a few examples.
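Two illustrative services along those lines; the resource names are made up, and the resource label stands in for whatever label or annotation the Api Gateway uses for service discovery, as covered in the earlier parts:

# Each service exposes one resource, but both select the same pod.
apiVersion: v1
kind: Service
metadata:
  name: dashboard-svc
  labels:
    resource: dashboard   # placeholder for the gateway's discovery metadata
spec:
  selector:
    app: cupenya-microservice
  ports:
  - port: 8080
    targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: reports-svc
  labels:
    resource: reports   # placeholder for the gateway's discovery metadata
spec:
  selector:
    app: cupenya-microservice
  ports:
  - port: 8080
    targetPort: 9000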

As you can see the defined resource is different, but all the pod selectors point to the cupenya-microservice pod. Therefore, the Api Gateway will register them as separate services and proxy requests to the corresponding REST Api in the cupenya-microservice pod. All we have to do now is tell Kubernetes to create our resources. You can keep the descriptor files separate or bundle them all in one file and run:

$ kubectl --namespace my-namespace apply -f my-descriptor-file.yaml

The order of operations doesn’t really matter much in this case. Kubernetes will create or update the specified resources in the namespace of your choice and they will automatically wire themselves into an operational system.

Well, that’s all there is to it to get our rather minimalistic setup running. In the next post we’re gonna build upon this and discuss other crucial aspects of a production system such as monitoring & health checks.

Thanks for reading my story. If you like this post, have any questions or think my code sucks, please leave a comment!

Jeroen Rosenberg

Dev of the Ops. Founder of Amsterdam.scala. Passionate about Agile, Continuous Delivery. Proud father of three.