Setting up a Local Kafka Environment in KRaft Mode with Docker-Compose and Bitnami Image, Enhanced by Provectus Kafka-UI

Tetiana
5 min read · Jun 13, 2023


With the upcoming Kafka 4.0, Zookeeper will be entirely phased out, and only KRaft mode will be supported. This significant change necessitates preparation on the part of both projects and developers.

As of now, the latest Kafka release is 3.4.1, and starting with version 3.3.1, released on October 3, 2022, KRaft is considered production-ready.

According to the current release plan, Kafka 3.7 (due January 2024) will be the last release to support Zookeeper.

For a comprehensive understanding of KRaft, I recommend reading this article.

In this guide, we will prepare a Docker environment for a Kafka broker without authentication, providing comfortable UI access to our data.

Tools and links

Bitnami Kafka Docker image: https://hub.docker.com/r/bitnami/kafka/

Provectus Kafka UI: https://github.com/provectus/kafka-ui

Note: the widely used https://hub.docker.com/r/wurstmeister/kafka image supports only Zookeeper, while the Bitnami image can operate in both modes.

Step 1. Set up a Kafka cluster on a Windows laptop

During the environment setup, I encountered a couple of issues you may run into as well.

Issue #1:

The KAFKA_KRAFT_CLUSTER_ID does not need to be specified for a single broker, contrary to what some tutorials may suggest. Kafka will generate an ID on the fly and print it in the logs. If you're setting up a cluster, take this ID from the logs and propagate it to all Kafka services in the docker-compose. All nodes in a single cluster must have the same KAFKA_KRAFT_CLUSTER_ID.
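For a multi-broker setup, sharing the ID might look like the sketch below. The ID value here is a placeholder — use the one printed in your own broker logs; service names and the rest of the configuration (listeners, quorum voters, etc.) are omitted for brevity.

```yaml
# Sketch: two brokers in one KRaft cluster sharing one cluster ID
services:
  kafka_1:
    image: docker.io/bitnami/kafka:3.4
    environment:
      - KAFKA_KRAFT_CLUSTER_ID=REPLACE_WITH_ID_FROM_LOGS   # same on every node
      - KAFKA_CFG_NODE_ID=1
  kafka_2:
    image: docker.io/bitnami/kafka:3.4
    environment:
      - KAFKA_KRAFT_CLUSTER_ID=REPLACE_WITH_ID_FROM_LOGS   # same on every node
      - KAFKA_CFG_NODE_ID=2
```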

Issue #2:

kafka_kafka_1 exited with code 1


Kafka may fail silently without throwing an exception. In my case, I copied an incorrect environment config, and the only way to identify the real problem was to enable debug logging via BITNAMI_DEBUG=yes. This allowed me to find the root cause of my issue, so I recommend using this config for troubleshooting.


Final docker-compose

You can find the docker-compose provided by Bitnami here, but it's not particularly useful, as you'll still need to look up the required configuration separately.

version: "3"
services:
  kafka_b:
    image: docker.io/bitnami/kafka:3.4
    hostname: kafka_b
    ports:
      - "9092:9092"
      - "9094:9094"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092,EXTERNAL://kafka_b:9094
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      - BITNAMI_DEBUG=yes
      - KAFKA_CFG_NUM_PARTITIONS=2
volumes:
  kafka_data:
    driver: local

You can switch between KRaft and Zookeeper modes using the KAFKA_ENABLE_KRAFT variable.

Be cautious with KAFKA_AUTO_CREATE_TOPICS_ENABLE, as you might inadvertently copy an incorrect value from a tutorial. KAFKA_CFG_LISTENERS, KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP, and KAFKA_CFG_ADVERTISED_LISTENERS are configured so that you can connect to Kafka from outside the Kafka container via the EXTERNAL listener, using host kafka_b and port 9094, while clients on the host machine use the advertised 127.0.0.1:9092. Ensure that the hostname in the config matches the advertised listeners.
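Assuming the compose file above is up, the listener setup can be sanity-checked with the CLI scripts bundled in the Bitnami image (replace kafka_b with your actual container name from docker ps if it differs):

```shell
# From inside the Kafka container, via the EXTERNAL listener
docker exec -it kafka_b kafka-topics.sh --bootstrap-server kafka_b:9094 --list

# From the host, via the advertised PLAINTEXT listener
# (requires a local Kafka CLI installation)
kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --list
```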

Step 2. Set up Provectus Kafka UI

I tested several UIs (Conduktor, Kafka Magic, Offset Explorer, etc.) and found that they can behave differently when you need to connect to your cluster by an external name, depending on your deployment strategy. For instance, Kafka Magic could connect to localhost (127.0.0.1) directly even without an external listener (because this UI is not deployed via Docker), but I don't use this tool, as many useful features are only available under a license.

If you’re looking for a free open-source tool, you might want to check out Provectus kafka-ui. However, the approach with localhost will not work here.

To get started with kafka-ui, visit https://docs.kafka-ui.provectus.io/configuration/quick-start. There's a trick with this tool: by default, you cannot change the cluster configuration at runtime. To do this, you need to set the environment variable DYNAMIC_CONFIG_ENABLED. Even with this variable, my button for cluster configuration was disabled, so I navigated directly to http://localhost:8080/ui/clusters/create-new-cluster.

For a successful connection to our local Kafka, we need to specify the correct hostname and port, as configured in our docker-compose for Kafka itself.
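As an illustration, a minimal config.yml for kafka-ui might look like the sketch below. The cluster name is arbitrary; kafka_b:9094 matches the EXTERNAL listener from Step 1 and assumes both containers are attached to the same Docker network so that the hostname resolves.

```yaml
# Sketch of /etc/kafkaui/dynamic_config.yaml for kafka-ui
kafka:
  clusters:
    - name: local            # arbitrary display name
      bootstrapServers: kafka_b:9094
```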

Also, the docker-compose provided in the kafka-ui tutorial doesn't work well on Windows: the volume source should be specified with an absolute path, and it must already exist. Otherwise, you will encounter additional errors.

Here’s my final docker-compose for kafka-ui:

services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      LOGGING_LEVEL_ROOT: 'DEBUG'
    volumes:
      - /c/tools/kafka/kui/config.yml:/etc/kafkaui/dynamic_config.yaml

LOGGING_LEVEL_ROOT is added for troubleshooting and is not required.

Step 3. Verify Kafka connection from a sample app

I'm providing a small Java snippet to verify the connection to Kafka. Without Zookeeper, clients connect directly to the brokers, so we only need to specify the bootstrap.servers property.


import java.util.Properties;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

@Slf4j
public class ProducerDemoWithCallback {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", StringSerializer.class);
        properties.put("value.serializer", StringSerializer.class);

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>("demo", "Hello world");

        kafkaProducer.send(producerRecord, new Callback() {
            @Override
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                if (e != null) {
                    // Log the failure instead of swallowing it silently
                    log.error("Failed to send record", e);
                    return;
                }
                log.info("Topic {}", recordMetadata.topic());
                log.info("Offset {}", recordMetadata.offset());
                log.info("Partition {}", recordMetadata.partition());
                log.info("Timestamp {}", recordMetadata.timestamp());
            }
        });
        // flush() blocks until all buffered records have been sent
        kafkaProducer.flush();
        kafkaProducer.close();
    }
}

The application logs show successful results from the callback:

23:09:53.858 [main] INFO org.apache.kafka.common.utils.AppInfoParser -- Kafka version: 3.4.1
23:09:53.861 [main] INFO org.apache.kafka.common.utils.AppInfoParser -- Kafka commitId: 8a516edc2755df89
23:09:53.861 [main] INFO org.apache.kafka.common.utils.AppInfoParser -- Kafka startTimeMs: 1686600593856
23:09:54.141 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata -- [Producer clientId=producer-1] Resetting the last seen epoch of partition demo-1 to 6 since the associated topicId changed from null to Bk8vwbsrTWudlV36WNybSg
23:09:54.142 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata -- [Producer clientId=producer-1] Resetting the last seen epoch of partition demo-0 to 6 since the associated topicId changed from null to Bk8vwbsrTWudlV36WNybSg
23:09:54.144 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata -- [Producer clientId=producer-1] Cluster ID: GYWfLbXgSYSvtSr8tcEdIw
23:09:54.170 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager -- [Producer clientId=producer-1] ProducerId set to 3000 with epoch 0
23:09:54.214 [kafka-producer-network-thread | producer-1] INFO com.backendthing.kafka.ProducerDemoWithCallback -- Topic demo
23:09:54.214 [kafka-producer-network-thread | producer-1] INFO com.backendthing.kafka.ProducerDemoWithCallback -- Offset 1
23:09:54.214 [kafka-producer-network-thread | producer-1] INFO com.backendthing.kafka.ProducerDemoWithCallback -- Partition 1
23:09:54.214 [kafka-producer-network-thread | producer-1] INFO com.backendthing.kafka.ProducerDemoWithCallback -- Timestamp 1686600594144

After that, we can see our messages in the corresponding topic in kafka-ui.
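The same check can be done from the command line with the console consumer shipped in the Bitnami image, reading back the message the Java app produced (replace kafka_b with your actual container name if it differs):

```shell
docker exec -it kafka_b kafka-console-consumer.sh \
  --bootstrap-server kafka_b:9094 \
  --topic demo --from-beginning
```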

Congratulations! Your environment is now fully set up and ready to use.
