Enable External Access to Confluent Kafka on Kubernetes — Step by Step

Sumant Rana
Published in The Startup
4 min read · May 9, 2020

This article describes the configuration required to be able to access Kafka on Kubernetes from outside the Kubernetes cluster.

Prerequisites:

  • Kafka installed on GKE (or any other K8s cluster) via cp-helm-charts, using the installation name <installation-name>.
  • The cp-helm-charts repository has been cloned locally and is available at the path <cp-helm-charts>.
  • kafkacat or any other Kafka client is available for testing and validating the configuration changes and access to Kafka on the cluster. Instructions for downloading and installing kafkacat can be found here.
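As a quick reference, kafkacat is available through common package managers (a sketch; package names vary by platform, and the project has since been renamed to kcat):

```shell
# macOS (Homebrew)
brew install kafkacat

# Debian/Ubuntu
sudo apt-get install kafkacat
```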

Default configuration:

If we install Kafka using the default configuration provided by cp-helm-charts and try to connect using kafkacat, we will not find an entry point to the cluster. This is because, by default, no external IP address is exposed on which we can reach the broker.

This is a sample output of the get svc command run on a cluster installed with Kafka:

The output of ‘kubectl get svc’ command on the default namespace

As we can see here, all the default services are of type ClusterIP with External-IP set to <none> in each, so there is no way to reach any of these services from outside the cluster.

Configuration Changes:

There are two configuration changes required to successfully connect to a Kafka broker running inside the Kubernetes cluster. For the sake of explanation, I have split them into two steps, but they can be done together to save time:

Step 1: Enable the Kafka broker to be contacted from outside the cluster

  • Switch to the Kafka charts directory under the <cp-helm-charts> directory
cd <cp-helm-charts>/charts/cp-kafka
  • Edit the file values.yaml and change the value of nodeport -> enabled from false to true.
nodeport:
  enabled: true
  • Leave the other two properties, servicePort and firstListenerPort, as-is.
  servicePort: 19092
  firstListenerPort: 31090
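Put together, the relevant section of charts/cp-kafka/values.yaml should look roughly like this after the change (only nodeport.enabled is modified):

```yaml
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
```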

Apply these changes to the existing Kafka installation on the cluster using helm upgrade

cd <cp-helm-charts>
helm upgrade <installation-name> .

When applied successfully, this configuration creates a NodePort service on the cluster that clients can use to connect to the broker.

This is a sample output of the get svc command run on the same cluster after making this configuration change:

The output of ‘kubectl get svc’ command on the default namespace

As we can see in the output, there is a new NodePort service. To connect to it, we can use the external IP of any of the nodes that form the cluster. Use the following command to get the external IPs of the nodes, and choose any one from the list.

kubectl get nodes -o wide
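If you only need the external addresses, a jsonpath query narrows the output down (a sketch; the field names follow the standard Node status schema):

```shell
# Print each node's name and its ExternalIP address, one per line
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
```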

From now on, <ExternalIP> refers to the chosen external IP. If we try to connect to the broker with the kafkacat client at <ExternalIP>:31090, it shows the following output:

> kafkacat -L -b <ExternalIP>:31090
Metadata for all topics (from broker -1: <ExternalIP>:31090/bootstrap):
 1 brokers:
  broker 0 at <InternalIP>:31090 (controller)
 49 topics:
  .....(details of topics)

According to the output, everything seems correct: the Kafka broker responds with the details of the topics.

But there is a catch. The metadata the broker returns to clients (which they will use for all further communication) contains an IP address that is internal to the cluster, because we have not yet configured external listeners.

broker 0 at <InternalIP>:31090 (controller)

So when the client sends a request to publish a message, it cannot reach this internal IP address and publishing fails, even though the initial connection to the broker succeeded.
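This failure mode can be spotted directly in the metadata: the advertised host falls in a private address range. A minimal sketch (the metadata line and the 10.128.0.5 address are hypothetical examples, not from a real cluster):

```shell
# Sample metadata line as printed by `kafkacat -L` (hypothetical values)
meta='broker 0 at 10.128.0.5:31090 (controller)'

# Extract host:port, then just the host
addr=${meta#*at }     # drop everything up to and including "at "
addr=${addr%% *}      # drop the trailing " (controller)"
host=${addr%%:*}

# RFC 1918 private ranges: clients outside the cluster cannot reach these
case "$host" in
  10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*)
    echo "advertised address $addr is private" ;;
  *)
    echo "advertised address $addr looks externally routable" ;;
esac
```

For the sample line above this prints "advertised address 10.128.0.5:31090 is private", which is exactly the symptom described here.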

In order to fully enable external access, we need to make one more configuration change as described in the next step.

Step 2: Configure the Kafka broker to return the correct metadata, enabling further communication with clients

  • Switch to the Kafka charts directory under the <cp-helm-charts> directory
cd <cp-helm-charts>/charts/cp-kafka
  • Edit the file values.yaml and enable external listeners:
"advertised.listeners": |-
 EXTERNAL://<ExternalIP>:31090

Make sure the text is aligned correctly (a single space before EXTERNAL); otherwise, the following error is shown when deploying the chart:

Error: error unpacking cp-kafka in cp-helm-charts: cannot load values.yaml: error converting YAML to JSON: yaml: line 52: could not find expected ‘:’

This will append an external listener to the list of internal listeners in the final configuration.
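In the cp-kafka chart, this override sits under the configurationOverrides block of values.yaml; after the edit, that section should look roughly like this (a sketch; any other overrides you already have there stay untouched):

```yaml
configurationOverrides:
  "advertised.listeners": |-
    EXTERNAL://<ExternalIP>:31090
```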

Apply these changes to the existing Kafka installation on the cluster using helm upgrade

cd <cp-helm-charts>
helm upgrade <installation-name> .

If we now connect to the broker with the kafkacat client at <ExternalIP>:31090, it shows the following output:

> kafkacat -L -b <ExternalIP>:31090
Metadata for all topics (from broker 0: <ExternalIP>:31090/0):
 1 brokers:
  broker 0 at <ExternalIP>:31090 (controller)
 49 topics:
  .....(details of topics)

As we can see, the output now shows the controller address with the external IP rather than the initial internal one.

Once this is done, clients can not only successfully fetch metadata from the broker in the cluster, but also publish and subscribe to messages.
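A quick end-to-end smoke test with kafkacat confirms this (a sketch; the topic name test-topic is an arbitrary example, and the commands require the broker to be reachable at <ExternalIP>:31090):

```shell
# Produce a single message from outside the cluster
echo "hello from outside the cluster" | kafkacat -P -b <ExternalIP>:31090 -t test-topic

# Consume it back; -e exits once the end of the partition is reached
kafkacat -C -b <ExternalIP>:31090 -t test-topic -e
```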
