WTF: Setting up Kafka Cluster using Docker Stack

Prateek
Apr 15, 2018

In the previous story, Setting up Kafka Cluster using Docker Swarm, we went through the journey of setting up a Kafka cluster with Docker Swarm. Most of the heavy lifting was done there. We will now do the same exercise using Docker Stack.

Docker Stack lets you configure a Docker Swarm using docker-compose files, so you do not have to issue the individual docker swarm commands yourself. Docker Stack provides functionality on top of Docker Swarm in the same way that Docker Compose provides functionality on top of the core docker commands.
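To illustrate the difference, here is a minimal sketch (the file name docker-stack.yml and the stack name kafka-stack are my own choices):

# With plain docker swarm (previous story), every service needs its own command, for example:
docker service create --name zookeeper --network kafka-net kafka:latest

# With docker stack, all services are declared in one compose file and deployed in one go:
docker stack deploy --compose-file docker-stack.yml kafka-stack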

What is already set up?

The details of the setup are provided in the previous story. A quick sketch of the node-label commands appears after the list below.

  • 4 VM nodes: node1, node3, node4 and node5.
  • 3 node swarm cluster made from node3, node4 and node5.
  • node3 is swarm manager
  • node4 and node5 are swarm workers
  • node3 is labeled as zoo=1 and kafka=1
  • node4 is labeled as kafka=2
  • node5 is labeled as kafka=3
  • Docker image (named kafka) with Kafka and Zookeeper
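For reference, the node labels used by the placement constraints later in this story were applied from the swarm manager. A minimal sketch of those commands (the exact steps are in the previous story):

# run on the swarm manager (node3)
docker node update --label-add zoo=1 --label-add kafka=1 node3
docker node update --label-add kafka=2 node4
docker node update --label-add kafka=3 node5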

Expected Kafka Cluster Setup

  • 3 broker Kafka cluster
  • 1 node Zookeeper ensemble
  • Producers and consumers should be able to connect from outside the swarm nodes

Docker Compose file for Docker Stack

We can set up the docker-compose file with the required services.

version: '3'
services:
  zookeeper:
    image: kafka:latest
    volumes:
      - zoo-stack-data:/tmp/zookeeper
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 2181:2181
    networks:
      - kafka-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.zoo==1
    command: /kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties

  kafka1:
    image: kafka:latest
    volumes:
      - kafka-stack-1-logs:/tmp/kafka-logs
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9093:9093
    networks:
      - kafka-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.kafka==1
    command: /kafka/bin/kafka-server-start.sh /kafka/config/server.properties --override zookeeper.connect=zookeeper:2181 --override listeners=INT://:9092,EXT://0.0.0.0:9093 --override listener.security.protocol.map=INT:PLAINTEXT,EXT:PLAINTEXT --override inter.broker.listener.name=INT --override advertised.listeners=INT://:9092,EXT://node3:9093 --override broker.id=1

  kafka2:
    image: kafka:latest
    volumes:
      - kafka-stack-2-logs:/tmp/kafka-logs
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9094:9094
    networks:
      - kafka-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.kafka==2
    command: /kafka/bin/kafka-server-start.sh /kafka/config/server.properties --override zookeeper.connect=zookeeper:2181 --override listeners=INT://:9092,EXT://0.0.0.0:9094 --override listener.security.protocol.map=INT:PLAINTEXT,EXT:PLAINTEXT --override inter.broker.listener.name=INT --override advertised.listeners=INT://:9092,EXT://node4:9094 --override broker.id=2

  kafka3:
    image: kafka:latest
    volumes:
      - kafka-stack-3-logs:/tmp/kafka-logs
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9095:9095
    networks:
      - kafka-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.kafka==3
    command: /kafka/bin/kafka-server-start.sh /kafka/config/server.properties --override zookeeper.connect=zookeeper:2181 --override listeners=INT://:9092,EXT://0.0.0.0:9095 --override listener.security.protocol.map=INT:PLAINTEXT,EXT:PLAINTEXT --override inter.broker.listener.name=INT --override advertised.listeners=INT://:9092,EXT://node5:9095 --override broker.id=3

networks:
  kafka-net:

volumes:
  kafka-stack-1-logs:
  kafka-stack-2-logs:
  kafka-stack-3-logs:
  zoo-stack-data:
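Assuming this file is saved as kafka-stack.yml on the swarm manager (node3), the whole cluster can be deployed and inspected with the following commands (the stack name kafka is my own choice):

# deploy the stack from the manager node
docker stack deploy --compose-file kafka-stack.yml kafka

# verify that the services and their tasks are running
docker stack services kafka
docker stack ps kafka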

Let’s break down this config to understand it.

  • Step 1: Define the network
# Define an overlay network
networks:
  kafka-net:
  • Step 2: Define the volumes needed
# Define the volumes
volumes:
  # volume for the kafka1 service
  kafka-stack-1-logs:
  # volume for the kafka2 service
  kafka-stack-2-logs:
  # volume for the kafka3 service
  kafka-stack-3-logs:
  # volume for the zookeeper service
  zoo-stack-data:
  • Step 3: Define the zookeeper service
# Service definition for zookeeper
zookeeper:
  # The docker image which contains both kafka and zookeeper
  image: kafka:latest

  # Volume mapped to /tmp/zookeeper
  # /tmp/zookeeper is the data directory defined in zookeeper.properties
  volumes:
    - zoo-stack-data:/tmp/zookeeper

  # Expose the port so topics can be created from a non-swarm node
  ports:
    - 2181:2181

  # Attach to the network created above
  networks:
    - kafka-net

  # Instructions for deploying
  deploy:
    # global mode
    mode: global

    # Deploy on the node with label zoo=1 (node3)
    placement:
      constraints:
        - node.labels.zoo==1

  # Command to run
  command: /kafka/bin/zookeeper-server-start.sh /kafka/config/zookeeper.properties
  • Step 4: Define the kafka1 service
# Service name kafka1
kafka1:
  # The docker image
  image: kafka:latest

  # Volume mapped to /tmp/kafka-logs
  # /tmp/kafka-logs is the log directory defined in server.properties
  volumes:
    - kafka-stack-1-logs:/tmp/kafka-logs

  # Expose the external listener port 9093
  ports:
    - 9093:9093

  # Attach to the network
  networks:
    - kafka-net

  # Deployment instructions
  deploy:
    # global mode
    mode: global
    # Deploy on the node with label kafka=1, i.e. node3
    placement:
      constraints:
        - node.labels.kafka==1

  # Command to run the broker.
  # --override overrides properties in server.properties
  # Properties that are overridden:
  #   zookeeper.connect=zookeeper:2181
  #     The hostname zookeeper resolves to the zookeeper service
  #   listeners=INT://:9092,EXT://0.0.0.0:9093
  #     Two listeners, INT and EXT, are defined
  #     INT is used for inter-broker communication;
  #     INT://:9092 resolves to the docker container hostname
  #     EXT is used by producers and consumers;
  #     EXT://0.0.0.0:9093 listens on all IPs
  #   listener.security.protocol.map=INT:PLAINTEXT,EXT:PLAINTEXT
  #     Both INT and EXT communicate over PLAINTEXT
  #   inter.broker.listener.name=INT
  #     The INT listener is used for inter-broker communication
  #   advertised.listeners=INT://:9092,EXT://node3:9093
  #     These are the addresses advertised to Kafka clients
  #     INT://:9092
  #       - used for inter-broker communication
  #       - resolves to the docker container name
  #       - all docker services in the stack can resolve the container hostname
  #     EXT://node3:9093
  #       - used by producers and consumers
  #       - resolves to the host where the container is deployed
  #       - the docker host's name is resolvable from other nodes
  #   broker.id=1
  #     Sets the broker id
  command: /kafka/bin/kafka-server-start.sh /kafka/config/server.properties --override zookeeper.connect=zookeeper:2181 --override listeners=INT://:9092,EXT://0.0.0.0:9093 --override listener.security.protocol.map=INT:PLAINTEXT,EXT:PLAINTEXT --override inter.broker.listener.name=INT --override advertised.listeners=INT://:9092,EXT://node3:9093 --override broker.id=1
  • Step 5: Define the kafka2 service

This is similar to kafka1. It is deployed on node4, and its external listener listens on port 9094, which is exposed on the host node4.

  • Step 6: Define the kafka3 service

This is similar to kafka1. It is deployed on node5, and its external listener listens on port 9095, which is exposed on the host node5.

  • Step 7: Create a producer and consumer from outside the swarm nodes (node1)

The detailed steps are provided in the previous story; a quick sketch follows below.
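As a rough sketch (assuming the Kafka command line tools from the same image are available on node1; the topic name test is my own choice), the producer and consumer connect through the advertised external listeners:

# create a topic from node1 using the exposed zookeeper port
/kafka/bin/kafka-topics.sh --create --zookeeper node3:2181 --replication-factor 3 --partitions 3 --topic test

# produce messages through the external listeners
/kafka/bin/kafka-console-producer.sh --broker-list node3:9093,node4:9094,node5:9095 --topic test

# consume the messages back
/kafka/bin/kafka-console-consumer.sh --bootstrap-server node3:9093,node4:9094,node5:9095 --topic test --from-beginning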
