Kafka on Kubernetes: The infrastructure

Start of our tale

Kafka is gaining more popularity across the open source community and there are countless ways of running it. Some like to run it on dedicated (virtual) machines, others prefer a PaaS solution. Lastly, you have a mix of both, and this is where we will go exploring.

Technology stack

I knew I had to run Kafka; the goal was to find out whether there are any drawbacks to running it on Kubernetes.

The Kubernetes cluster will run on Microsoft Azure, so all code examples reflect this. However, they are easily adapted to other cloud platforms.

I wanted a DevOps approach to the infrastructure, so I relied on Terraform to set it up. This allows me to recreate parts of the clusters if they get destroyed, but also guarantees that the result of the deployment is always the same.

So to sum up: we need Kafka, ZooKeeper, Kubernetes, Microsoft Azure, Terraform and Docker.

Later I will also explain how to write a Python application to produce data, but that's for the next story.

Terraform

For those who don't know what Terraform is, you can find a lot of documentation here.

Basically, Terraform allows you to build, change and version infrastructure. It does this by keeping a state file of the infrastructure during deployment. If someone alters your environment, you can just redeploy: Terraform compares the deployed setup with the state file and, if something is inconsistent, reverts that object to the state described in the state file.

So let's get started by creating our infrastructure. In the repository you are developing, create a new folder called terraform.

We start by telling Terraform in which resource group it needs to deploy by creating an “azure.tf” file which holds the following configuration:

# Microsoft Azure Provider
provider "azurerm" {
  version = "1.4.0"
}

# Source the resource group from Azure
data "azurerm_resource_group" "kafka_aks_res_group" {
  name = "${var.resourceGroupName}"
}

As good developers, we don't hardcode values, so most values will be referenced through variables.

Next we need the definition of the Kubernetes cluster. You can find the full configuration here. It is pretty straightforward; I just replaced the hardcoded values with variables again. The small snippet below should give you an idea.

resource "azurerm_kubernetes_cluster" "kube_cluster" {
...
agent_pool_profile {
name = "${var.PoolName}"
count = "${var.PoolNodeCount}"
vm_size = "${var.PoolVmSize}"
os_type = "Linux"
os_disk_size_gb = "${var.PoolDiskSizeGb}"
}
...

All of your variables will probably give you an error if you are using an IDE with Terraform support. So let's create a “vars.tf” file which holds the variables we defined throughout the Terraform code. Every variable you created should be listed here. But don't worry if you miss one, Terraform will bug you about it the moment you try to deploy.

variable "resourceGroupName" {
description = "The resource group name"
}

variable "adminUsersname" {
description = "Username of the administrator"
}

variable "azureClientId" {
description = "Client id for azure"
}

variable "sshKeyData" {
description = "ssh key to authenticate against the cluster"
}
...

So how do we pass values to the variables? We will create a small bash script which does nothing more than export the necessary values. Terraform picks these up through the naming convention (all exported variables start with TF_VAR_).

Create a file called “variables.dev.secret.sh” and add the variables you will need. A typical script might look like this:

export TF_VAR_adminUsersname=""
export TF_VAR_sshKeyData="ssh-rsa ..."
export TF_VAR_azureClientId=""
...
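Before running any Terraform command, source this file in the same shell so the exported TF_VAR_ values are picked up. Since the file holds secrets, keep it out of version control.

source variables.dev.secret.sh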

Kubernetes: manifest files in Terraform

So we described our infrastructure; now it is time to use manifest files to describe what the Kubernetes deployment should look like. Instead of starting from scratch, I found a GitHub project that had all the manifest files for a couple of cloud platforms. So it became a matter of configuring them correctly and finding out what I needed.

Before we dive deeper into the manifest files, I'll quickly show you how you can execute them as part of the Terraform code.

Terraform has a resource called null_resource which allows you to define actions that are not bound to a specific resource. Using the local-exec provisioner we can define commands which are executed locally on your machine.

So what does this mean? The following snippet will give some clarification:

resource "null_resource" "kubernetes_resource" {
depends_on = [
"azurerm_kubernetes_cluster.kube_cluster",
]

#Get the credentials from the cluster and register to tmp file
provisioner "local-exec" {
command = <<EOT
sleep 10 && \
az aks get-credentials \
--resource-group ${data.azurerm_resource_group.kafka_aks_res_group.name} \
--name ${var.kubernetesClusterName} \
-a \
-f /tmp/${var.kubernetesClusterName}.conf
EOT
}

So we define a resource which depends on the creation of the cluster; using local-exec we are then able to execute commands. In order to run kubectl commands against our Kubernetes cluster we first need to retrieve the credentials from our AKS cluster, which we store in a local config file.

From here on we can just execute kubectl commands (make sure that kubectl is installed on the machine you are executing this from).

provisioner "local-exec" {
command = <<EOT
export KUBECONFIG="/tmp/${var.kubernetesClusterName}.conf"
kubectl apply -f aks_manifests/storage/aks_storage_broker.yml && \
kubectl apply -f aks_manifests/storage/aks_storage_zookeeper.yml
EOT
}

The above snippet creates the storage classes for our AKS cluster.
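If you want to verify that the storage classes were created, a quick check from the command line looks like this (pointing kubectl at the config file we retrieved above; the cluster name is a placeholder):

export KUBECONFIG="/tmp/<kubernetes cluster name>.conf"
kubectl get storageclass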

Kubernetes: StatefulSets

One of the things Kafka needs is dedicated storage. In Kubernetes, Kafka will be running inside a pod, but it might happen that your pod crashes. With a normal configuration you will lose the storage, and this is something we need to avoid at all costs.

In order to address the problem we need something called “StatefulSets”. By defining a persistent storage class and using StatefulSets, the pod will link to a dedicated storage disk outside the pod. If the pod dies, Kubernetes will spin up a new pod and automatically re-attach the disk to it.

How do we achieve this? First let’s take a look at the code for the storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka-broker
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  kind: "Managed"
  storageaccounttype: Standard_LRS

After we have defined the storage class for the external disks, we need to create the StatefulSet. The next snippet is part of the Kafka configuration:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
...
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: kafka-broker
      resources:
        requests:
          storage: 200Gi

Note that the storageClassName references the StorageClass we defined earlier.
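Once the StatefulSet is deployed, you can check that every replica received its own claim and Azure disk:

kubectl get pvc -n kafka
kubectl get pv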

Kubernetes: Docker images

The applications running inside a Kubernetes cluster should be dockerised, so I ended up with three different images for the setup.

First, you will need ZooKeeper. As it is part of the Kafka distribution, I made my life a bit easier by creating a single image for both ZooKeeper and Kafka.

In retrospect this is not the most elegant solution (why install the Kafka binaries if you only need ZooKeeper?), but it is one image less. Since we are using Alpine as the base image, the complete image stayed below 400MB, which was acceptable for me.

Secondly, I really wanted to include Kafka manager, as it gives some good insights into the cluster (don't forget to enable JMX) but also offers some easy administration functions. So one of the Docker images builds Kafka manager from source and runs it as a service. However, there is something to be said about the way I did this in the code.

I download the source and build it, which results in a new artefact. I then delete the source and use the artefact. However, building from source is resource and time consuming, so if you want to avoid this it would be better to build the artefact somewhere else and export it to an external storage account. You could then import the zip file and just deploy it; this results in a smaller image and also reduces the build time.
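A possible way to implement that suggestion with the Azure CLI, assuming you build the Kafka manager zip once and have a storage account available (the account, container and file names below are placeholders):

# Upload the pre-built artefact once
az storage blob upload --account-name <storage account> --container-name artefacts --name kafka-manager.zip --file ./kafka-manager.zip

# During the image build, download and unpack it instead of compiling from source
az storage blob download --account-name <storage account> --container-name artefacts --name kafka-manager.zip --file kafka-manager.zip
unzip kafka-manager.zip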

Lastly, I used an init container. There are lots of reasons why you could use one; in this case it was mainly used to have certain tools in place, like kubectl and netcat-openbsd.

The reason I needed netcat-openbsd was the following error I got when it was missing:

Readiness probe failed: nc: unrecognized option: q
BusyBox v1.27.2 (2018-01-29 15:48:57 GMT) multi-call binary.
Usage: nc [OPTIONS] HOST PORT - connect
nc [OPTIONS] -l -p PORT [HOST] [PORT] - listen
	-e PROG	Run PROG after connect (must be last)
	-l	Listen mode, for inbound connects
	-lk	With -e, provides persistent server
	-p PORT	Local port
	-s ADDR	Local address
	-w SEC	Timeout for connects and final net reads
	-i SEC	Delay interval for lines sent
	-n	Don't do DNS resolution
	-u	UDP mode
	-v	Verbose
	-o FILE	Hex dump traffic
	-z	Zero-I/O mode (scanning)
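Installing netcat-openbsd fixes this: the BusyBox nc that ships with the Alpine base image does not understand the -q flag used by the readiness probe, while the OpenBSD variant does. A minimal sketch of the relevant part of the utility image, assuming an Alpine base (the kubectl version is only an example):

# Replace BusyBox nc with the OpenBSD variant and add kubectl
apk add --no-cache curl netcat-openbsd
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.3/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/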

So in the end my Kubernetes Kafka configuration looked like this:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: "broker"
  replicas: 3
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: kafka
      annotations:
    spec:
      terminationGracePeriodSeconds: 30
      initContainers:
      - name: init-config
        image: xxx.xxx.io/kafka_init_utils
        ...
      containers:
      - name: broker
        image: xxx.xxx.io/kafka_alpine
        env:
      imagePullSecrets:
      - name: "containerregistrysecret"

I uploaded my Docker images to a private registry, so I had to include the imagePullSecrets configuration. If you upload your Docker images to a public repo, you can omit the “imagePullSecrets” part.
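The secret itself can be created with kubectl; the values below are placeholders for your own registry (an Azure Container Registry in my case), and it has to live in the kafka namespace:

kubectl create secret docker-registry containerregistrysecret \
  --docker-server=<registry name>.azurecr.io \
  --docker-username=<service principal id> \
  --docker-password=<service principal password> \
  -n kafka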

Running and interacting with the cluster

CLI

We have talked about all the components that are part of the solution; time to actually get this deployment going.

First make sure you have all of the following tools installed:

  • docker
  • terraform
  • kubectl
  • kubectx
  • az cli tool

First you need to build the Docker images, upload them to a container registry and reference them correctly in your manifest files. I am assuming you know how to build Docker images, so I'll skip the actual build commands.
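If you need a refresher, pushing to an Azure Container Registry typically looks like this (the registry name, image name and build context are placeholders):

az acr login --name <registry name>
docker build -t <registry name>.azurecr.io/kafka_alpine .
docker push <registry name>.azurecr.io/kafka_alpine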

Before you start, make sure that the az CLI has been installed (if you are using a virtualenv, don't forget to source it). If you have two-factor authentication enabled for your Azure account, you need to run “az login” and authenticate before we can start.

Next we do the Terraform deployment. Source your variables script and run the following Terraform commands:

terraform init
terraform apply

You might need to wait a while before the cluster becomes active. Once the deployment is finished you will see the following:

[Screenshot: Kubernetes cluster in Azure]

In order to access the cluster you just created, you will need to merge its credentials into your local kubeconfig file.

You can manage this by executing the following command:

az aks get-credentials --resource-group="<resource group>" --name="<aks cluster name>"

Lastly, we need to switch to the correct context (in case you are running multiple Kubernetes clusters):

[Screenshot: switching context with kubectx]
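With kubectx this is a one-liner; the context name matches the AKS cluster name you configured in Terraform:

kubectx                     # list the available contexts
kubectx <aks cluster name>  # switch to the newly created cluster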

So now we can start interacting with our cluster. If you want to know whether the pods are running, you can check this from the command line; just make sure that you specify the namespace (the -n flag). If you forget it, kubectl will look in the “default” namespace, which is empty.

[Screenshot: overview of the running pods]
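The command behind that overview, using the kafka namespace from the manifests:

kubectl get pods -n kafka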

Kubernetes UI

If you are not a fan of the command line you can always use the UI. We start by proxying to the UI with the following command:

kubectl proxy
Starting to serve on 127.0.0.1:8001

Several endpoints will be made available on this port, one of them being the UI. So in order to reach the UI you can visit the following URL:

http://localhost:8001/ui

Don't be alarmed if you get the following error:

[Screenshot: Kubernetes UI error]

In order to fix this issue, remove the “s” from “https:”. Press enter and if all is well, you should be inside the UI.

[Screenshot: the Kubernetes UI]

On the left side you will see several tabs you can choose from. In the namespace dropdown, switch the value from “default” to “kafka”. You can now see the pods, how much CPU is being used by your cluster, and so on.

Kafka Manager

One of the components of our deployment was Kafka manager, a UI which gives you some visibility into the Kafka cluster.

The UI has been configured as a service, but you probably want to access this from your local machine. So we need to port forward the service to a local port in order to access it from our local browser.

First we need to retrieve the pod names to find out the exact name of our Kafka manager pod. Once we have established this, we can port-forward the service. In our example we are mapping port 80 of the pod to our local port 8089 in the kafka namespace.

[Screenshot: kubectl port forwarding]
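The two commands behind that screenshot; the pod name is whatever kubectl reports for the Kafka manager pod:

kubectl get pods -n kafka
kubectl port-forward <kafka manager pod name> 8089:80 -n kafka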

If you now open your browser at “http://localhost:8089”, the Kafka manager will show up.

Now we want to add a cluster to the manager. From the “Cluster” dropdown select “Add Cluster”. The most important part here is the Cluster Zookeeper Hosts field; this needs to match the ZooKeeper hostnames.

[Screenshot: adding a cluster in Kafka manager]
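With the manifests used here, this value is the in-cluster DNS name of the ZooKeeper service. Assuming the service is called zookeeper and lives in the kafka namespace, it would look like this:

zookeeper.kafka.svc.cluster.local:2181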

If you want some additional statistics on the cluster, you can enable JMX polling, which enables certain graphs in the tool.

Some final words

Well, it was fun doing this, but I did run into quite a few issues during the setup. While it is possible to run Kafka on Kubernetes, I think some additional benchmarks are needed (like message throughput, disk throughput and disaster recovery testing) before I would feel comfortable switching to a Kubernetes solution.

Also the current design leaves me with a couple of questions:

  • What is the best way to expose the brokers to external applications (because not all of your apps are running on the same cluster)?
  • Should you foresee dedicated hardware in your Kubernetes cluster for ZooKeeper and Kafka? This would imply we need to introduce taints and tolerations (see the sketch below).
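A minimal sketch of what that could look like with taints, assuming a node reserved for Kafka (the node name and the dedicated=kafka key are hypothetical); the matching toleration would then be added to the Kafka and ZooKeeper StatefulSets:

# Only pods that tolerate dedicated=kafka will be scheduled on this node
kubectl taint nodes <node name> dedicated=kafka:NoSchedule
# Label the node as well so the broker pods can target it with a node selector
kubectl label nodes <node name> dedicated=kafka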

Definitely some additional exploring is needed but I hope this has started your journey.

As always, the final solution can be found on my GitHub page.
