Running Kafka, Producer, and Consumer on a local Kubernetes Cluster

Aris David
3 min read · Jun 7, 2020

In this tutorial, we’ll show how to deploy Kafka and Zookeeper with just a few commands using a Bitnami Helm chart. We will also implement a simple Python producer and consumer using kafka-python. All four components will be installed and run inside a Kubernetes cluster.

Required Technologies

  • A local Kubernetes cluster
  • kubectl
  • Helm
  • Docker
  • Python 3 with the kafka-python library
  • k9s (optional, for the UI step)

Desired Architecture

The goal is to have Kafka, Zookeeper, Producer, and Consumer pods running in the same namespace.

TL;DR

Install Zookeeper and Kafka into our local Kubernetes cluster

1. Create a namespace:

kubectl create namespace kafkaplaypen

2. Set the namespace context to the namespace created in the previous step:

kubectl config set-context $(kubectl config current-context) --namespace=kafkaplaypen

3. Go to Bitnami Helm Charts repository and read the README file for advanced configurations: https://github.com/bitnami/charts/tree/master/bitnami/kafka

4. Install Kafka and Zookeeper by running the commands:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install kafka-local bitnami/kafka --set persistence.enabled=false,zookeeper.persistence.enabled=false

Take note of the Kafka DNS name; we will use it later to connect the producer and consumer apps:

kafka-local.kafkaplaypen.svc.cluster.local:9092

5. Get all pods in the namespace kafkaplaypen. There should be 2 pods deployed:

  • kafka-local-0
  • kafka-local-zookeeper-0
kubectl get pods

NAME                      READY   STATUS              RESTARTS   AGE
kafka-local-0             0/1     ContainerCreating   0          20s
kafka-local-zookeeper-0   0/1     ContainerCreating   0          20s

6. Wait until both pods reach the “Running” status.

kubectl get pods -w

NAME                      READY   STATUS    RESTARTS   AGE
kafka-local-0             1/1     Running   0          2m7s
kafka-local-zookeeper-0   1/1     Running   0          2m7s

7. With just a few commands, we have deployed Kafka inside Kubernetes in the namespace kafkaplaypen.

We now have running instances of the Kafka ecosystem with Zookeeper and Kafka broker.

Implement a Producer

Let’s create a simple producer app that generates a random first name, last name, and email address every 5 seconds and pushes the data into a Kafka-topic called my-topic.

Build the producer Docker image:

docker build -t producer:latest -f producer/Dockerfile .
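The Dockerfile referenced above could look like the following sketch; the base image and paths are assumptions, not the exact file from the repo:

```dockerfile
# producer/Dockerfile -- illustrative
FROM python:3.8-slim
WORKDIR /app
RUN pip install kafka-python
COPY producer/producer.py ./producer.py
CMD ["python3", "-u", "./producer.py"]
```

The -u flag keeps Python output unbuffered so the pod logs stream in real time, matching the kubectl run commands later in this tutorial.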

Implement a Consumer

Let’s create a consumer app that subscribes to my-topic and prints each record it receives. Build the consumer Docker image:

docker build -t consumer:latest -f consumer/Dockerfile .

Install Consumer and Producer into our local Kubernetes cluster

Install the producer application first:

kubectl run producer --rm --tty -i --image producer:latest --image-pull-policy Never --restart Never --namespace kafkaplaypen --command -- python3 -u ./producer.py

Install the consumer application next:

kubectl run consumer --rm --tty -i --image consumer:latest --image-pull-policy Never --restart Never --namespace kafkaplaypen --command -- python3 -u ./consumer.py

Check the logs

Check the logs to confirm that the producer publishes data into my-topic and the consumer consumes data from my-topic.

Optional Kubernetes UI

This step requires k9s to be installed.

Inspect the pods from the namespace kafkaplaypen. Assuming all steps have been followed correctly, you should see Kafka, Zookeeper, consumer, and producer pods in running state.

Conclusion

In this tutorial, we launched Kafka and Zookeeper using a Bitnami Helm chart deployment, in the namespace kafkaplaypen inside our local Kubernetes cluster.

We have implemented a simple producer that generates random first_name, last_name, and email data and publishes it into my-topic every 5 seconds. We have implemented a consumer that consumes data published from my-topic.

We installed the producer and consumer apps in the same namespace as Kafka and Zookeeper.

