Apache Kafka Security with Kerberos on Kubernetes

Apache Kafka Architecture Overview

Apache Kafka is an open-source stream-processing platform, written in Java and Scala, that was initially developed by LinkedIn and later donated to the Apache Software Foundation. Kafka is a scalable, fault-tolerant publish-subscribe messaging system that helps us build distributed applications. Because it is fast, scalable and fault-tolerant, Kafka is used in messaging scenarios where JMS, RabbitMQ and AMQP are not even considered. Its high throughput and reliability also make it useful for tracking service calls or IoT sensor data. This article gives an overview of Apache Kafka security with ACLs and Kerberos.

Kafka streams data from one system to another in real time, acting as a middle layer that decouples real-time data pipelines. It can also be used to feed fast-lane systems such as Spark, Apache Storm and other CEP systems. This data is often used for the following functions -

  • Data analysis
  • Reporting
  • Data science crunching
  • Compliance auditing
  • Backups

Apache Kafka Security with Kerberos

Assuming that a Kerberos setup (KDC) is already installed, we will use it for authentication.

First, we have to create the principals by using the following commands -

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
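
A client principal and keytab can be created in the same way; a minimal sketch, matching the client principal and keytab used later in this article -

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka-client-1@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_client.keytab kafka-client-1@EXAMPLE.COM"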

Now we have to configure Kafka for Kerberos. We will add a JAAS file and name it kafka_server_jaas.conf -

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

Now we will pass the krb5.conf location and the JAAS configuration to each of the Kafka brokers as JVM options -

-Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
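
One common way to pass these flags, assuming the standard Kafka startup scripts are used, is through the KAFKA_OPTS environment variable -

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties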

We have to make sure that the keytabs configured in the JAAS file are readable by the operating system user who starts the Kafka broker.
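
For example, the keytab permissions can be tightened and the keytab verified against the KDC as follows (the kafka service account name is an assumption) -

# restrict the keytab to the broker's service account
sudo chown kafka:kafka /etc/security/keytabs/kafka_server.keytab
sudo chmod 400 /etc/security/keytabs/kafka_server.keytab
# verify that the keytab can obtain a ticket from the KDC
kinit -kt /etc/security/keytabs/kafka_server.keytab kafka/kafka1.hostname.com@EXAMPLE.COM
klist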

Now configure the SASL listener and mechanisms in server.properties as -

listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI

The service name configured in server.properties must match the primary part of the Kafka brokers' principal. Since the broker principal above is kafka/kafka1.hostname.com@EXAMPLE.COM, we write -

sasl.kerberos.service.name=kafka

Now configure Kafka clients.

The configuration of a client that uses a keytab is as follows -

sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/kafka_client.keytab" \
    principal="kafka-client-1@EXAMPLE.COM";
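
Alternatively, if the client should reuse a ticket already obtained with kinit instead of a keytab, a minimal sketch of the JAAS configuration is -

sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useTicketCache=true;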

Again, we have to make sure that the keytab configured in the JAAS entry is readable by the operating system user who starts the Kafka client.

Now pass the krb5.conf file location to each client JVM as -

-Djava.security.krb5.conf=/etc/kafka/krb5.conf

At last, configure the following properties in producer.properties or consumer.properties -

security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
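
As a quick check, these client properties can be exercised with the console tools; a sketch in which the broker address, topic and property file names are assumptions -

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf"
kafka-console-producer --broker-list kafka1.hostname.com:9092 --topic Test-topic \
    --producer.config producer.properties
kafka-console-consumer --bootstrap-server kafka1.hostname.com:9092 --topic Test-topic \
    --from-beginning --consumer.config consumer.properties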

Apache Kafka Security with ACL

Kafka ACLs are defined in the general format "Principal P is [Allowed/Denied] Operation O From Host H On Resource R". In a secure cluster, both client requests and inter-broker operations require authorization.

Now, in server.properties, enable the default authorizer by adding -

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
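
By default, resources without a matching ACL are then accessible only to super users; this behaviour is controlled by the following property, which can be set explicitly in server.properties -

allow.everyone.if.no.acl.found=false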

Now we will set up the broker principals as super users to give them the access required to perform inter-broker operations.

super.users=User:Bob;User:Alice
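
With the Kerberos broker principal created earlier, and assuming the default principal-to-name mapping (which strips the host and realm), this would typically be -

super.users=User:kafka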

The TLS user name by default will be the full distinguished name of the client certificate, for example -

"CN=host1.example.com,OU=,O=Confluent,L=London,ST=London,C=GB"

It can be customized in server.properties as follows -

principal.builder.class=CustomizedPrincipalBuilderClass
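
For Kerberos principals specifically, the broker can also rewrite principal names into short names with mapping rules; a sketch assuming the EXAMPLE.COM realm used above -

sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//,DEFAULT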

Now, in order to add User:Bob as a producer of Test-topic, we can execute the following -

kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:Bob \
    --producer --topic Test-topic

In order to give a client access to produce to and consume from the newly created topic, we can authorize it as -

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"

kafka-acls --authorizer-properties zookeeper.connect=kafka.example.com:2181 \
    --add --allow-principal User:kafkaclient \
    --producer --topic securing-kafka

kafka-acls --authorizer-properties zookeeper.connect=kafka.example.com:2181 \
    --add --allow-principal User:kafkaclient \
    --consumer --topic securing-kafka --group securing-kafka-group
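
The ACLs that were added can be verified with -

kafka-acls --authorizer-properties zookeeper.connect=kafka.example.com:2181 \
    --list --topic securing-kafka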

Stream Processing Approach

To know more about stream processing platforms, we recommend reading our article "Stream Processing with Apache Flink".

To understand more about our Stream and Real-Time Processing Solutions for Enterprises, get in touch with us.

Originally published at https://www.xenonstack.com on May 10, 2019.
