Kafka Multi-Tenancy Architecture: SSL client authentication

Itamar Yerushalmi
Talking Tech Around
4 min read · Nov 7, 2017

Kafka is a distributed streaming platform, so naturally we want many producers and consumers to share the same Kafka cluster.

In one of our latest projects at a large-scale cyber-security company, we built a Kafka deployment that serves roughly 10TB of digested data each day, split across over 100 million events. Since multiple customers share the cluster, we needed a multi-tenancy architecture in which each customer can access only its own topic.

There are a few guides on the internet for enabling SSL client authentication on Kafka. Some of them are outdated or incomplete, while others include an overkill Kerberos implementation. So I created this one, based on Kafka's official documentation; hopefully you'll find it useful.

The producer can write to all topics. A consumer can access the secured topic only with the right certificate.

For this example, I have one Kafka server and 2 clients (consumers).

Server Side:

Make sure that you have truststore and keystore JKS files for each server.

  1. In case you want a self-signed certificate, you can use the following commands:
export PASSWORD=password
keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.client1.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.client2.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:$PASSWORD
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
keytool -keystore kafka.client1.keystore.jks -alias localhost -validity 365 -genkey

Make sure that you configure the certificate properly — this will be used later in the ACL configuration.

keytool -keystore kafka.client1.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:$PASSWORD
keytool -keystore kafka.client1.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.client1.keystore.jks -alias localhost -import -file cert-signed
keytool -keystore kafka.client2.keystore.jks -alias localhost -validity 365 -genkey
keytool -keystore kafka.client2.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:$PASSWORD
keytool -keystore kafka.client2.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.client2.keystore.jks -alias localhost -import -file cert-signed
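Before moving on, it's worth confirming the Distinguished Name (DN) baked into each client certificate, since the exact DN string is what you will grant permissions to in the ACL step later. A quick check, assuming the keystore and $PASSWORD from the commands above:

```shell
# Print the certificate details; the "Owner" line is the DN that Kafka will
# see as the client's principal, e.g. CN=client1,OU=1,O=1,L=1,ST=1,C=1.
keytool -list -v -keystore kafka.client1.keystore.jks -storepass $PASSWORD -alias localhost
```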

2. On Kafka servers — server.properties — add the following lines:

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.endpoint.identification.algorithm=HTTPS
ssl.keymanager.algorithm=SunX509
ssl.keystore.location=/PATH-TO/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.keystore.type=JKS
ssl.protocol=TLS
ssl.trustmanager.algorithm=PKIX
ssl.truststore.location=/PATH-TO/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.truststore.type=JKS
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true

3. Add SSL values on port 9093 to the advertised.listeners and listeners keys:

advertised.listeners=PLAINTEXT://KAFKA_IP:9092,SSL://KAFKA_IP:9093
listeners=PLAINTEXT://KAFKA_IP:9092,SSL://KAFKA_IP:9093
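After restarting the broker, you can sanity-check the new SSL listener from any machine with openssl installed (substituting your broker's address for KAFKA_IP):

```shell
# Open a TLS connection to the broker's SSL listener; a successful handshake
# prints the broker's certificate chain. Once ssl.client.auth=required is set,
# the broker will also request a client certificate during this handshake.
openssl s_client -connect KAFKA_IP:9093 -tls1_2 </dev/null
```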

4. Change

ssl.client.auth=none

To

ssl.client.auth=required

5. For logging, enable DEBUG on Kafka authorization by changing the following line in log4j.properties:

From:

log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender

To:

log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender

6. On the client servers add the following lines to consumer.properties and producer.properties:

security.protocol=SSL
ssl.truststore.location=/PATH-TO/ssl/kafka.client1.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/PATH-TO/kafka.client1.keystore.jks
ssl.keystore.password=password
ssl.key.password=password

Notice that each client server uses its own keystore and truststore files (client1 on one, client2 on the other).
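For reference, consuming from the topic with the matching client config looks something like this (the config path is illustrative):

```shell
# Consume from the secured topic over the SSL listener, authenticating with
# client1's keystore/truststore via the consumer.properties shown above.
./bin/kafka-console-consumer.sh --bootstrap-server KAFKA_IP:9093 \
  --topic secured-topic --from-beginning \
  --consumer.config /opt/kafka/config/consumer.properties
```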

7. Produce messages to a new topic:

./bin/kafka-console-producer.sh --broker-list KAFKA_IP:9093 --topic secured-topic --producer.config /opt/kafka/config/producer.properties
>1
CTRL+D

You should now be able to write to the topic from both servers; if you look at kafka-authorizer.log on the server, you will see the allowed operations being logged.
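To watch the authorizer decisions in real time, something like this works (the log directory depends on your installation; /opt/kafka/logs is an assumption here):

```shell
# Authorizer decisions are written to kafka-authorizer.log once DEBUG logging
# is enabled (step 5); filter for the topic to see allow/deny entries.
tail -f /opt/kafka/logs/kafka-authorizer.log | grep secured-topic
```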

8. Using the kafka-acls command, set permissions for one of the users. Note that once a topic has any ACL, allow.everyone.if.no.acl.found no longer applies to it, so the other user loses access:

bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=ZOOKEEPER_IP:2181 --add --allow-principal User:"CN=client1,OU=1,O=1,L=1,ST=1,C=1" --operation Write --operation Read --operation Describe --topic secured-topic
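You can verify what was applied with the same tool's --list flag:

```shell
# Show the ACLs currently attached to the topic; only client1's DN should
# appear as an allowed principal.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=ZOOKEEPER_IP:2181 \
  --list --topic secured-topic
```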

9. Now, if you try to write or read as the granted user, you will succeed; the other user will get permission denied.
