Installation of Strimzi Kafka on Kubernetes


Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. Strimzi simplifies the process of running Apache Kafka in a Kubernetes cluster. Strimzi provides container images and Operators for running Kafka on Kubernetes. Strimzi Operators are fundamental to the running of Strimzi. The Operators provided with Strimzi are purpose-built with specialist operational knowledge to effectively manage Kafka.

Kafka concepts

A Kafka cluster comprises multiple brokers. Topics are used to receive and store data in a Kafka cluster. Topics are split into partitions, where the data is written. Partitions are replicated across brokers for fault tolerance. A Strimzi deployment of Kafka typically includes:

  • Kafka cluster of broker nodes
  • ZooKeeper cluster of replicated ZooKeeper instances
  • Kafka Connect cluster for external data connections
  • Kafka Exporter to extract additional Kafka metrics data for monitoring
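How partitions and their replicas are spread across brokers can be sketched in a few lines of Python. This is an illustrative model only, not Strimzi or Kafka code: the real assignment logic also randomizes the starting broker, but the round-robin idea is the same.

```python
# Illustrative model of how Kafka spreads partition replicas across brokers.
# Simplified: real Kafka also randomizes the starting broker for balance.

def assign_replicas(num_brokers, num_partitions, replication_factor):
    """Return {partition: [broker ids holding a replica of that partition]}."""
    assignment = {}
    for p in range(num_partitions):
        # The leader replica and its followers land on consecutive brokers,
        # so no broker holds two copies of the same partition.
        assignment[p] = [(p + r) % num_brokers for r in range(replication_factor)]
    return assignment

# 3 brokers, 3 partitions, replication factor 2:
print(assign_replicas(3, 3, 2))
# → {0: [0, 1], 1: [1, 2], 2: [2, 0]}
```

With replication factor 2, losing any single broker still leaves one live replica of every partition, which is the fault tolerance described above.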

A short overview of Kafka cluster components

Broker: A broker, sometimes referred to as a server or node, orchestrates the storage and passing of messages.

Topic: A topic provides a destination for the storage of data. Each topic is split into one or more partitions.

Cluster: A group of broker instances.

Producer: A producer sends messages to a broker topic to be written to the end offset of a partition. Messages are written to partitions by a producer on a round-robin basis, or to a specific partition based on the message key.

Consumer: A consumer subscribes to a topic and reads messages according to topic, partition and offset.
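The producer behavior described above (round-robin for keyless messages, stable hashing for keyed ones) can be sketched in Python. This is a stand-in for illustration only: Kafka's real default partitioner hashes keys with murmur2, while this sketch uses zlib.crc32 for simplicity.

```python
import zlib
from itertools import count

# Illustrative sketch of producer partition selection. Kafka's default
# partitioner uses murmur2; crc32 here is just a simple stand-in.
_round_robin = count()

def choose_partition(key, num_partitions):
    if key is None:
        # No key: spread messages round-robin across partitions.
        return next(_round_robin) % num_partitions
    # With a key: hash it, so the same key always lands on the same partition
    # (which preserves per-key message ordering).
    return zlib.crc32(key.encode()) % num_partitions

# The same key always maps to the same partition.
assert choose_partition("order-42", 3) == choose_partition("order-42", 3)
```

This is why keyed messages for the same entity (say, one order id) are consumed in order: they all live in one partition.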

Now let's start the installation process. First, create a kafka namespace:

$ kubectl create namespace kafka

Deploy the Strimzi Kafka operator along with its resources (Deployment, ConfigMap, CRDs, and more):

$ kubectl create -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml -n kafka
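Before moving on, it is worth confirming the operator came up. One way to check (assuming the kafka namespace and the default names from the Strimzi manifest above):

```shell
# Wait for the Cluster Operator deployment to become available.
kubectl wait deployment/strimzi-cluster-operator --for=condition=Available -n kafka --timeout=120s

# The operator pod should be in Running state.
kubectl get pods -n kafka -l name=strimzi-cluster-operator
```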

Now create the Kafka cluster, after making sure the operator is working fine. Create a file for the cluster definition:

$ vi kafka-cluster.yaml

Put the configuration below in the file and apply it:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kafka-cluster
  namespace: kafka
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  kafka:
    config:
      socket.send.buffer.bytes: 10240000
      socket.receive.buffer.bytes: 10240000
      socket.request.max.bytes: 100000012
      auto.create.topics.enable: true
      offsets.topic.replication.factor: 2
      transaction.state.log.min.isr: 1
      transaction.state.log.replication.factor: 1
    listeners:
      plain: {}
      tls: {}
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    replicas: 3
    template:
      pod:
        securityContext:
          runAsUser: 0
          fsGroup: 0
    resources: {}
    storage:
      deleteClaim: true
      size: 10Gi
      type: persistent-claim
    version: 2.4.1
  zookeeper:
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    template:
      pod:
        securityContext:
          runAsUser: 0
          fsGroup: 0
    replicas: 1
    resources: {}
    storage:
      deleteClaim: true
      size: 8Gi
      type: persistent-claim
  entityOperator:
    topicOperator: {}
    userOperator: {}

If you need external access, replace the listeners section with the configuration below (with type nodeport, the brokers are exposed via a Kubernetes NodePort service; the external listener uses port 9094 by default):

listeners:
  plain: {}
  tls: {}
  external:
    type: nodeport
    tls: false
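Once the cluster is running with an external listener, the brokers are reachable on each node's IP at the assigned node port. Assuming Strimzi's default service naming for the cluster above, you can look it up with:

```shell
# Find the node port assigned to the external bootstrap service.
kubectl get service kafka-cluster-kafka-external-bootstrap -n kafka \
  -o jsonpath='{.spec.ports[0].nodePort}'
```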

Note: if you are running on a cloud provider, I do not recommend exposing Kafka with a LoadBalancer listener, as you may be charged $30 or more per external IP.
“You can change the Kafka version and the number of brokers. I set up three Kafka brokers and one ZooKeeper node.”

$ kubectl create -f kafka-cluster.yaml -n kafka

Verify all the resources created from kafka-cluster.yaml.
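One way to verify (assuming the kafka namespace used throughout):

```shell
# List the StatefulSets, pods and services the operator created.
kubectl get all -n kafka

# Wait until the Kafka custom resource reports Ready.
kubectl wait kafka/kafka-cluster --for=condition=Ready -n kafka --timeout=300s
```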

Step 4: Create a Topic

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: kafka-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: "kafka-cluster"
spec:
  partitions: 3
  replicas: 1

Save this to a file (for example, kafka-topic.yaml) and apply it:

$ kubectl create -f kafka-topic.yaml

Step 5: Create a Producer and Consumer

$ kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.19.0-kafka-2.4.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic kafka-topic

Open another session and run the consumer:

$ kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.19.0-kafka-2.4.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server kafka-cluster-kafka-bootstrap:9092 --topic kafka-topic --from-beginning
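The same round trip can also be done programmatically. Below is a minimal sketch using the third-party kafka-python client (an assumption on my part; the blog uses the console tools). It assumes the code runs inside the cluster, where the bootstrap service DNS name resolves.

```python
# Minimal produce/consume round trip with kafka-python (pip install kafka-python).
# Assumes this runs inside the cluster so the bootstrap service DNS resolves.
from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "kafka-cluster-kafka-bootstrap:9092"

producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send("kafka-topic", key=b"demo", value=b"hello from python")
producer.flush()  # block until the message is acknowledged

consumer = KafkaConsumer(
    "kafka-topic",
    bootstrap_servers=BOOTSTRAP,
    auto_offset_reset="earliest",  # start from the oldest available message
    consumer_timeout_ms=5000,      # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.partition, message.offset, message.value)
```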

Your Strimzi Kafka setup is now ready and working fine.

Thanks for reading the blog. Please do try this yourself, because “practice makes perfect”.

Don’t forget to give us a clap and share with others.

Buy Me a Coffee : — https://www.buymeacoffee.com/YAOL

Previous Blogs:
Installation of Apache Kafka on Ubuntu 16.04, 18.04 and 20.04

Installation of Apache Kafka with SSL on Ubuntu 16.04, 18.04 and 20.04

References:
https://strimzi.io/docs/operators/latest/overview.html
https://strimzi.io/docs/operators/0.19.0/deploying.html
