Developing and Deploying Ballerina Microservices in Kubernetes in 5 Minutes

Rajkumar Rajaratnam
11 min read · Sep 17, 2018


It’s now easier than ever to develop and deploy microservices in containers quickly with Ballerina.

We are going to develop a Ballerina microservice and deploy it in Kubernetes in just 5 minutes. Thanks to Ballerina for making all this possible by providing built-in support for automatically generating Docker and Kubernetes artifacts from annotations.

Prerequisites

  • Ballerina: If you don’t have Ballerina installed on your system, refer to the official doc to set it up.
  • Kubernetes: If you don’t have a Kubernetes cluster up and running, grab my Vagrantfile and configure a multi-node Kubernetes cluster in VirtualBox.
  • Your local kubectl should have access to your remote Kubernetes cluster. This is required because we are going to deploy the Kubernetes artifacts from the local machine to a remote cluster. If you haven’t set this up, read my previous blog and do it.
  • An understanding of my previous blog on how to deploy Ballerina microservices in Docker is mandatory.

Developing a Ballerina Microservice

I’m going to be using the same service that I developed in my previous blog on how to deploy Ballerina Microservices in Docker.

Generating Kubernetes Artifacts Automatically

Ballerina provides built-in support for automatically generating Docker and Kubernetes artifacts as part of the build process. You just need to annotate your Ballerina program with Docker and Kubernetes annotations, and the Ballerina build process will generate the Docker and Kubernetes artifacts for you. Very convenient.

Let’s annotate our service with Kubernetes annotations. Note that you don’t need the Docker annotations anymore, because the Kubernetes annotations are self-contained and include their own settings for generating the Dockerfile and Docker image.
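For reference, here is roughly what the annotated program looks like. This is a sketch based on Ballerina 0.981.x syntax; the annotation fields shown (image, serviceType, hostname) are illustrative, and the service body is the one from the previous blog:

```ballerina
import ballerina/http;
import ballerinax/kubernetes;

// Service and Ingress annotations attach to the listener endpoint.
@kubernetes:Service {
    serviceType: "NodePort",
    name: "utility"
}
@kubernetes:Ingress {
    hostname: "ballerina.gateway.com"
}
endpoint http:Listener utilityEP {
    port: 8280
};

// The Deployment annotation attaches to the service.
@kubernetes:Deployment {
    image: "utility:v3",
    name: "utility-deployment"
}
@http:ServiceConfig {
    basePath: "/utility"
}
service<http:Service> utility bind utilityEP {
    hello(endpoint caller, http:Request request) {
        http:Response res = new;
        res.setTextPayload("Congratulations, let's dance!\n");
        _ = caller->respond(res);
    }
}
```

The names used in the annotations (utility, utility-deployment) line up with the artifact names you’ll see in the build output below.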

Let’s build it and see the outputs.

$ ballerina build utility.bal
Compiling source
utility.bal
Generating executable
utility.balx
@kubernetes:Service - complete 1/1
@kubernetes:Ingress - complete 1/1
@kubernetes:Deployment - complete 1/1
@kubernetes:Docker - complete 3/3
Run the following command to deploy the Kubernetes artifacts:
kubectl apply -f /Users/raj/projects/medium-blogs/ballerina-in-k8s/kubernetes/

This time the ballerina build command has done four things for me: 1) it generated the compiled binary, 2) it generated the Dockerfile, 3) it built the Docker image, and 4) it generated the Kubernetes artifacts.

$ tree
.
├── kubernetes
│ ├── docker
│ │ └── Dockerfile
│ ├── utility_deployment.yaml
│ ├── utility_ingress.yaml
│ └── utility_svc.yaml
├── utility.bal
└── utility.balx
$ docker images
REPOSITORY TAG IMAGE ID
utility v3 a73bbdde85e4

Let’s move on to making this Docker image available to our Kubernetes cluster.

Loading Docker Image in Kubernetes Cluster

Before deploying Kubernetes artifacts, we need to make sure Docker runtime within the Kubernetes cluster is able to find our Docker image.

If you have a Docker registry (private or DockerHub), you could annotate your Ballerina program so that the build automatically pushes the Docker image to the registry for you. Then the Docker runtime within our Kubernetes cluster can pull the image from that registry. I don’t have a Docker registry configured (I could have used DockerHub, but I intentionally skipped it to experiment with a different way of loading Docker images), so I’m going to load the Docker image manually on the Kubernetes nodes.
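If you do want the registry route, the deployment annotation would look something like this. This is a sketch; the registry prefix, username, and password values are placeholders, so check the ballerinax/kubernetes annotation reference for your Ballerina version:

```ballerina
// Pushes the built image to a registry as part of the build.
// <your-username> and <your-password> are placeholders.
@kubernetes:Deployment {
    image: "docker.io/<your-username>/utility:v3",
    push: true,
    username: "<your-username>",
    password: "<your-password>"
}
```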

Save the Docker image as a tarball with the following command.

$ docker save utility:v3 > utility-v3.tar
$ ls
kubernetes utility-v3.tar utility.bal utility.balx

You have to copy this tarball to the Kubernetes nodes. You can either scp it or use the Vagrant shared folder. If you are using my Vagrantfile to provision your Kubernetes cluster, your project directory (where the Vagrantfile is located) is mounted at the /vagrant directory of the Kubernetes nodes. In other words, if you place a file in your project directory, you can access it in the /vagrant directory of the Kubernetes nodes.

$ cp utility-v3.tar ../../ecomm-integration-ballerina/kubernetes-cluster/
$ cd ../../ecomm-integration-ballerina/kubernetes-cluster/
$ ls
LICENSE utility-v3.tar Vagrantfile
$ vagrant ssh k8s-node-1
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-134-generic x86_64)
vagrant@k8s-node-1:/vagrant$ pwd
/vagrant
vagrant@k8s-node-1:/vagrant$ ls
Vagrantfile utility-v3.tar

Use the following command on the Kubernetes nodes to load the Docker image from the tarball.

vagrant@k8s-node-1:/vagrant$ docker load -i utility-v3.tar
c9b26f41504c: Loading layer [==================================================>] 3.584 kB/3.584 kB
638d4576a926: Loading layer [==================================================>] 78.62 MB/78.62 MB
dd41350bacfe: Loading layer [==================================================>] 47.15 MB/47.15 MB
fe07b911d809: Loading layer [==================================================>] 27.65 kB/27.65 kB
Loaded image: utility:v3

Alright, our Docker image is available on the Kubernetes nodes. Now the Docker runtime can find our image locally without pulling from a registry.

vagrant@k8s-node-1:/vagrant$ docker images
REPOSITORY TAG IMAGE ID
utility v3 a73bbdde85e4

Let’s move on to deploying the Ballerina-generated Kubernetes artifacts in our Kubernetes cluster.

Deploying Kubernetes Deployment

Let’s deploy the Ballerina-generated Kubernetes Deployment (utility_deployment.yaml) and verify that everything is okay.
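For reference, the generated utility_deployment.yaml looks roughly like this. This is a sketch; the exact apiVersion and defaults depend on your Ballerina and Kubernetes versions:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: utility-deployment
  labels:
    app: utility
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: utility
    spec:
      containers:
      - name: utility-deployment
        image: utility:v3
        # Must not be "Always": we loaded the image manually,
        # so there is no registry to pull from.
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8280
```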

$ kubectl apply -f kubernetes/utility_deployment.yaml
deployment.extensions/utility-deployment created
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE
utility-deployment 1 1 1 1
$ kubectl get rs
NAME DESIRED CURRENT READY
utility-deployment-74d458c6bf 1 1 1
$ kubectl get pods
NAME READY STATUS RESTARTS
utility-deployment-74d458c6bf-lgpd2 1/1 Running 0

$ kubectl logs utility-deployment-74d458c6bf-lgpd2
ballerina: initiating service(s) in 'utility.balx'
ballerina: started HTTP/WS endpoint 0.0.0.0:8280

All good, our Ballerina microservice is up and running inside a Docker container in a Kubernetes Pod.

Let’s see a couple of different ways to access our Ballerina microservice running in the Kubernetes cluster.

I’m trying all these different ways just to understand some Kubernetes internals; you can skip ahead to the Ingress section directly.

Accessing Ballerina Microservice via PodIP

Let’s get the Pod IP using the following Kubectl command.

$ kubectl get pods -l app=utility -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
utility-deployment-74d458c6bf-lgpd2 1/1 Running 1 21h 172.16.2.4 k8s-node-2 <none>

172.16.2.4 is the Pod IP. You can also get this using the following command.

$ kubectl get pods -l app=utility -o yaml | grep podIP
cni.projectcalico.org/podIP: 172.16.2.4/32
podIP: 172.16.2.4

This IP address is only reachable from your Kubernetes nodes. Send a request to your service at this IP address from one of your Kubernetes nodes.

vagrant@k8s-node-2:~$ curl -i  http://172.16.2.4:8280/utility/hello
HTTP/1.1 200 OK
content-type: text/plain
content-length: 31
server: ballerina/0.981.1
date: Sun, 16 Sep 2018 23:58:11 GMT
Congratulations, let's dance!

Great, it works.

Hey, but Pods are mortal. They die for various reasons and are replaced by new, identical Pods with different IPs. So we cannot really use a Pod IP to call our service. This is the problem a Kubernetes Service solves.

A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.

There are many different ways (ClusterIP, NodePort, LoadBalancer, ExternalName and Ingress) to expose a Kubernetes Service. We will see a couple of them in this blog.

Let’s create and deploy a Kubernetes Service to expose our Kubernetes Deployment.

Deploying Kubernetes Service

You may have noticed earlier in this blog that the ballerina build command automatically generated the Service YAML for us (utility_svc.yaml).
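Its contents are roughly as follows. This is a sketch; the nodePort itself is auto-assigned by Kubernetes at deploy time:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: utility
  labels:
    app: utility
spec:
  type: NodePort
  ports:
  - port: 8280
    targetPort: 8280
    protocol: TCP
  # Routes traffic to Pods carrying this label.
  selector:
    app: utility
```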

Let’s deploy this.

$ kubectl apply -f kubernetes/utility_svc.yaml
service/utility created

Let’s verify.

$ kubectl get svc -l app=utility
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
utility NodePort 10.106.85.57 <none> 8280:30438/TCP

When you deploy a Kubernetes Service, the Kubernetes system creates an Endpoint object behind the scenes and updates it with the Pod IPs. When a Pod dies and new Pods are created, this Endpoint object is updated with the new IPs. Essentially, it is a mapping between the Cluster IP and the active Pods’ IPs. When you send a request to the Cluster IP, it is load-balanced across the set of Pod IPs.

$ kubectl get ep utility
NAME ENDPOINTS AGE
utility 172.16.2.4:8280 12m

You can see that the Endpoint has the Pod IP we used a while ago in this blog to access our Ballerina service. If you scale your replicas, you will see new Pod IPs being added to this Endpoint object. Let’s scale our deployment to 2 replicas and verify that a new Pod is created and the Endpoint is updated with its IP.

$ kubectl scale deployment utility-deployment --replicas=2
deployment.extensions/utility-deployment scaled
$ kubectl get pods -l app=utility
NAME READY STATUS RESTARTS
utility-deployment-74d458c6bf-gjkvq 1/1 Running 0
utility-deployment-74d458c6bf-lgpd2 1/1 Running 1
$ kubectl get ep utility
NAME ENDPOINTS AGE
utility 172.16.1.3:8280,172.16.2.4:8280 25m

Let’s move on and see how we can access our service using the Cluster IP of the Service.

Accessing Ballerina Microservice via ClusterIP

As we saw earlier, we can get the Cluster IP of our Kubernetes Service using the following command.

$ kubectl get svc -l app=utility
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
utility NodePort 10.106.85.57 <none> 8280:30438/TCP

10.106.85.57 is the Cluster IP. This IP is reachable only from your Kubernetes nodes. Let’s send a request to our Ballerina service at this IP and verify.

vagrant@k8s-node-2:~$ curl -i http://10.106.85.57:8280/utility/hello
HTTP/1.1 200 OK
content-type: text/plain
content-length: 31
server: ballerina/0.981.1
date: Mon, 17 Sep 2018 00:37:00 GMT
Congratulations, let's dance!

Great, it works.

Hey, but we still haven’t exposed our Ballerina service outside of our Kubernetes cluster. Let’s move on and see a couple of ways to expose our Kubernetes Service outside the cluster.

Accessing Ballerina Microservice via NodePort

One of the ways to expose your Kubernetes Service on an external IP address is to use the NodePort service type. If you look at the Ballerina-generated service definition (utility_svc.yaml), you’ll see that it already uses NodePort as the service type. So if your Kubernetes nodes have public IPs, your Ballerina service is ready to serve traffic on the internet.

When you deploy a Kubernetes Service with the NodePort service type, it exposes the service on each Kubernetes node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is automatically created. You’ll be able to contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.

When the client sends a request to the NodeIP, it is routed to the Cluster IP, and from there it is routed to one of the Pods (using the Endpoint object mapping).

Let’s get the Node IP.

$ kubectl get nodes -o yaml | grep IP
projectcalico.org/IPv4Address: 192.168.205.10/24
type: InternalIP
projectcalico.org/IPv4Address: 192.168.205.11/24
type: InternalIP
projectcalico.org/IPv4Address: 192.168.205.12/24
type: InternalIP

Let’s get the NodePort of our Kubernetes Service.

$ kubectl get svc utility -o yaml | grep nodePort -C 5
uid: 7b9d09ea-ba0f-11e8-920c-021d76b33cfe
spec:
clusterIP: 10.106.85.57
externalTrafficPolicy: Cluster
ports:
- nodePort: 30438
port: 8280
protocol: TCP
targetPort: 8280
selector:
app: utility

Let’s send a request to NodeIP:NodePort and see if it works. You can pick any one of the Nodes.

$ curl -i 192.168.205.10:30438/utility/hello
HTTP/1.1 200 OK
content-type: text/plain
content-length: 31
server: ballerina/0.981.1
date: Mon, 17 Sep 2018 01:27:46 GMT
Congratulations, let's dance!

Great, it works perfectly. We just accessed our Ballerina service from outside the Kubernetes cluster.

I’ll explore how we can expose our Service using the LoadBalancer and ExternalName service types in another blog. Lastly, let’s see how we can expose our Service using Ingress.

Accessing Ballerina Microservice via Ingress

Unlike all the above examples, Ingress is not a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entry point into your cluster.

Most importantly, Ingress doesn’t eliminate the need for an external load balancer. You still need to expose your Ingress Controller deployment to the internet using an external load balancer.

Ingress resources don’t do anything by themselves: they are processed by ingress controllers, which vanilla Kubernetes does not provide by default. Managed Kubernetes providers may pre-install an appropriate ingress controller for you, e.g. Google Kubernetes Engine pre-installs a GCE ingress controller which provisions Google Cloud load balancers.

Let’s deploy Nginx Ingress Controller in our Kubernetes cluster.

$ kubectl apply -f https://raw.githubusercontent.com/ecomm-integration-ballerina/kubernetes-cluster/master/ingress/nginx-ingress.yaml
namespace/ingress-nginx created
deployment.extensions/default-http-backend created
service/default-http-backend created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.extensions/nginx-ingress-controller created

Let’s verify the deployment.

$ kubectl get pods -n ingress-nginx
NAME READY STATUS
default-http-backend-6586bc58b6-fqrzj 1/1 Running
nginx-ingress-controller-6bd7c597cb-4gvw4 1/1 Running

The Nginx Ingress Controller and default backend pods are up and running. What is this default backend pod? Nginx requires us to set up a default backend service that it can fall back to when it can’t match any service for an incoming request. Nginx enforces two requirements on this backend pod: it should return 200 on the /healthz endpoint and 404 for any other endpoint. You can verify that the default backend pod provides this behavior. Run the following commands from the Kubernetes master node. We are using the Cluster IP to access the service.

$ kubectl get svc default-http-backend -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.101.224.131 <none> 80/TCP 18m
$ curl -i http://10.101.224.131/healthz
HTTP/1.1 200 OK
Date: Mon, 17 Sep 2018 05:02:46 GMT
Content-Length: 2
Content-Type: text/plain; charset=utf-8
ok
$ curl -i http://10.101.224.131/other
HTTP/1.1 404 Not Found
Date: Mon, 17 Sep 2018 05:02:50 GMT
Content-Length: 21
Content-Type: text/plain; charset=utf-8
default backend - 404

That works as expected.

Let’s get the IP of the Nginx Controller Pod.

$ kubectl get pods -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default-http-backend-6586bc58b6-fqrzj 1/1 Running 0 56m 172.16.1.5 k8s-node-1 <none>
nginx-ingress-controller-6bd7c597cb-4gvw4 1/1 Running 0 56m 172.16.1.4 k8s-node-1 <none>

172.16.1.4 is the Nginx Controller Pod IP; it is not reachable from outside the cluster. Anyway, let’s try to access our service via Ingress on this IP. We first need to deploy an Ingress resource to expose our deployment. Let’s deploy the Ballerina-generated Ingress resource.

$ kubectl apply -f kubernetes/utility_ingress.yaml
ingress.extensions/utilityep-ingress created

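The Ingress resource we just applied looks roughly like this. This is a sketch; the exact annotations Ballerina emits may differ by version:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: utilityep-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  # Requests for this host are routed to the utility Service.
  - host: ballerina.gateway.com
    http:
      paths:
      - path: /
        backend:
          serviceName: utility
          servicePort: 8280
```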
Let’s verify.

vagrant@k8s-head:~$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
utilityep-ingress ballerina.gateway.com 80 23m

Let’s map the ballerina.gateway.com host entry to the Nginx Controller Pod IP in /etc/hosts on one of the Kubernetes nodes.

vagrant@k8s-head:~$ cat /etc/hosts
127.0.0.1 localhost
172.16.1.4 ballerina.gateway.com

From the same Kubernetes node where you edited /etc/hosts, let’s send a request to our Ballerina service via Ingress.

$ curl -i http://ballerina.gateway.com/utility/hello
HTTP/1.1 200 OK
Server: nginx/1.15.3
Date: Mon, 17 Sep 2018 05:47:26 GMT
Content-Type: text/plain
Content-Length: 31
Connection: keep-alive
Congratulations, let's dance!

Great, it works. We’ve just accessed our Ballerina service via ingress. But let’s see how we can access it from outside the cluster.

As I mentioned earlier, we need to expose our Nginx Ingress Controller deployment to the internet. I don’t have an external load balancer, so I’m going to expose this using NodePort service type. Let’s create and deploy a Kubernetes Service to do so.
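The service definition in that file is roughly equivalent to the following. This is a sketch; the selector must match the labels on the Nginx Ingress Controller pods, which depend on how the controller deployment is labeled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: ingress-nginx
```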

$ kubectl apply -f https://raw.githubusercontent.com/ecomm-integration-ballerina/kubernetes-cluster/master/ingress/nginx-ingress-svc.yaml
service/nginx-ingress created

Let’s verify.

$ kubectl get svc nginx-ingress -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx-ingress NodePort 10.102.89.81 <none> 80:30101/TCP,443:30216/TCP

Okay, so the NodePort 30101 is for HTTP and 30216 is for HTTPS. And we already know our Kubernetes node IPs (192.168.205.10, 192.168.205.11, 192.168.205.12).

Let’s map ballerina.gateway.com to one of these IPs on our local machine.

Rajs-MacBook-Pro:ballerina-in-k8s raj$ cat /etc/hosts
127.0.0.1 localhost
192.168.205.10 ballerina.gateway.com

Let’s send a request from our local machine.

Rajs-MacBook-Pro:ballerina-in-k8s raj$ curl -i http://ballerina.gateway.com:30101/utility/hello
HTTP/1.1 200 OK
Server: nginx/1.15.3
Date: Mon, 17 Sep 2018 05:55:21 GMT
Content-Type: text/plain
Content-Length: 31
Connection: keep-alive
Congratulations, let's dance!

Awesome, we have just accessed our Ballerina service via ingress from outside our cluster.

References

  1. Kubernetes NodePort vs LoadBalancer vs Ingress?
  2. Studying the Kubernetes Ingress System
