Deploying Strimzi Kafka and Java clients with security - Part 1: Authentication

Laurent Broudoux
Oct 18, 2021 · 5 min read


Strimzi.io provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. Strimzi does a great job of making hard configuration tasks easy, but securing a broker can still be complicated from a developer's perspective.

In this blog series, we will detail the configuration elements of secured Kafka deployment options with Strimzi. We'll also go through the configuration of Java clients using popular frameworks, namely Spring Boot and Quarkus.

SCRAM-SHA and MTLS secured Strimzi clusters on OpenShift

Strimzi supports three authentication options: SCRAM-SHA-512, Mutual TLS and OAuth. We will only cover SCRAM-SHA and Mutual TLS in this post. Also, be sure to stay tuned for the second part of this series, which will cover the Authorization topic.

Let’s go!

SCRAM-SHA Authentication

Once the Strimzi Operator is installed on your Kubernetes cluster, you should have access to the Kafka custom resource. The Kafka resource allows you to configure a cluster deployment. We'll create a scram-cluster using the default values, except for the listeners specification where we'll add a secured listener with the scram-sha-512 authentication type like below:
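Here is a minimal sketch of such a Kafka resource; replica counts, storage type and the exact API version are illustrative and may differ in your environment:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: scram-cluster
  namespace: kafka-test
spec:
  kafka:
    replicas: 3
    listeners:
      # Secured external listener using SCRAM-SHA-512 authentication
      - name: external
        port: 9094
        type: route          # 'route' because we deploy on OpenShift
        tls: true
        authentication:
          type: scram-sha-512
    storage:
      type: ephemeral        # illustrative; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}         # required so KafkaUser resources are reconciled
```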

Note that we’re using a listener of the route type because we’re deploying on OpenShift. Choose ingress instead on vanilla Kubernetes and check the associated documentation regarding TLS passthrough.

After some minutes, you should have a running Kafka cluster in your namespace (we have used kafka-test in our case). You should now be able to extract the different security elements required for accessing the cluster. Let’s run the following commands to get the cluster certificate, the PKCS12 truststore and the associated password:

$ kubectl get secret scram-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.crt}' | base64 -d > scram-cluster-ca.crt
$ kubectl get secret scram-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.p12}' | base64 -d > scram-cluster-ca.p12
$ kubectl get secret scram-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.password}' | base64 -d
---- OUTPUT ----
LoUk0HtOd8tD

The next step is to create a KafkaUser so that we will be able to authenticate using its credentials. Strimzi allows you to specify already existing credentials using a Secret, but it can also generate them for you. Let’s do this by creating a scram-user that will be attached to our scram-cluster in the same namespace. We ensure that the authentication type is scram-sha-512 like below:
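A minimal sketch of the KafkaUser resource; the strimzi.io/cluster label is what binds the user to our scram-cluster:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: scram-user
  namespace: kafka-test
  labels:
    # Tells the Strimzi User Operator which cluster this user belongs to
    strimzi.io/cluster: scram-cluster
spec:
  authentication:
    type: scram-sha-512
```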

After some seconds, the Strimzi cluster operator should have created a specific Secret you can extract credentials from. Just use the following commands:

$ kubectl get secret scram-user -n kafka-test -o jsonpath='{.data.password}' | base64 -d
---- OUTPUT ----
tDtDCT3pYKE5
$ kubectl get secret scram-user -n kafka-test -o jsonpath='{.data.sasl\.jaas\.config}' | base64 -d
---- OUTPUT ----
org.apache.kafka.common.security.scram.ScramLoginModule required username="scram-user" password="tDtDCT3pYKE5";

We are now done on the cluster side, so let’s switch to the client configuration side! The first Java client uses the Spring Boot framework to publish messages on the Kafka broker. It relies on the spring-kafka library and, while there are plenty of code and configuration samples out there, they include small, messy variations… Here below is my reference configuration, reusing the previously extracted values:
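A sketch of the Spring Boot application.yml, using the standard spring.kafka.* properties; the bootstrap route host and the truststore path are placeholders for your environment:

```yaml
spring:
  kafka:
    # Placeholder: the external bootstrap route exposed by OpenShift
    bootstrap-servers: scram-cluster-kafka-external-bootstrap-kafka-test.apps.example.com:443
    security:
      protocol: SASL_SSL
    properties:
      sasl.mechanism: SCRAM-SHA-512
      # JAAS config extracted from the scram-user Secret above
      sasl.jaas.config: >-
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="scram-user" password="tDtDCT3pYKE5";
    ssl:
      # Truststore extracted from the scram-cluster-cluster-ca-cert Secret
      trust-store-location: file:/deployments/config/scram-cluster-ca.p12
      trust-store-password: LoUk0HtOd8tD
      trust-store-type: PKCS12
```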

On the Quarkus client, we love diversity 😉 We have used both the bare Kafka client and the Reactive Messaging client. Properties for the bare Kafka client must be configured with a kafka prefix; properties for the Reactive Messaging client must be configured with an mp.messaging.<direction>.<channel-name> prefix. Here below, we listen for incoming messages on a microcks-services-updates topic and publish on different topics using the bare client:
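A sketch of the Quarkus application.properties showing both prefixes; bootstrap host and truststore path are placeholders:

```properties
# --- Bare Kafka client: 'kafka.' prefix ---
kafka.bootstrap.servers=scram-cluster-kafka-external-bootstrap-kafka-test.apps.example.com:443
kafka.security.protocol=SASL_SSL
kafka.sasl.mechanism=SCRAM-SHA-512
kafka.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="scram-user" password="tDtDCT3pYKE5";
kafka.ssl.truststore.location=/deployments/config/scram-cluster-ca.p12
kafka.ssl.truststore.password=LoUk0HtOd8tD
kafka.ssl.truststore.type=PKCS12

# --- Reactive Messaging client: 'mp.messaging.<direction>.<channel-name>.' prefix ---
mp.messaging.incoming.microcks-services-updates.connector=smallrye-kafka
mp.messaging.incoming.microcks-services-updates.topic=microcks-services-updates
mp.messaging.incoming.microcks-services-updates.security.protocol=SASL_SSL
mp.messaging.incoming.microcks-services-updates.sasl.mechanism=SCRAM-SHA-512
mp.messaging.incoming.microcks-services-updates.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="scram-user" password="tDtDCT3pYKE5";
mp.messaging.incoming.microcks-services-updates.ssl.truststore.location=/deployments/config/scram-cluster-ca.p12
mp.messaging.incoming.microcks-services-updates.ssl.truststore.password=LoUk0HtOd8tD
```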

That’s it for SCRAM-SHA authentication.

MTLS Authentication

Let’s do similar things for Mutual TLS authentication. First, create a new mtls-cluster, still using the Kafka custom resource; this time specifying a tls authentication type in the external listener definition:
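A minimal sketch of this second Kafka resource; only the listener authentication type changes compared to the SCRAM cluster, and replica counts and storage remain illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: mtls-cluster
  namespace: kafka-test
spec:
  kafka:
    replicas: 3
    listeners:
      # Secured external listener using Mutual TLS client authentication
      - name: external
        port: 9094
        type: route          # 'route' because we deploy on OpenShift
        tls: true
        authentication:
          type: tls
    storage:
      type: ephemeral        # illustrative; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```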

Note that we’re using a listener of the route type because we’re deploying on OpenShift. Choose ingress instead on vanilla Kubernetes and check the associated documentation regarding TLS passthrough.

After some minutes, you should have a running Kafka cluster in your namespace (we have used kafka-test in our case). You should now be able to extract the different security elements required for accessing the cluster. Let’s run the following commands to get the cluster certificate, the PKCS12 truststore and the associated password:

$ kubectl get secret mtls-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.crt}' | base64 -d > mtls-cluster-ca.crt
$ kubectl get secret mtls-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.p12}' | base64 -d > mtls-cluster-ca.p12
$ kubectl get secret mtls-cluster-cluster-ca-cert -n kafka-test -o jsonpath='{.data.ca\.password}' | base64 -d
---- OUTPUT ----
sHgN5VLVJCzU

The next step is to create a KafkaUser. This time again, create an mtls-user that will be attached to our mtls-cluster in the same namespace. We ensure that the authentication type is tls like below:
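A minimal sketch of this KafkaUser resource, again bound to its cluster through the strimzi.io/cluster label:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: mtls-user
  namespace: kafka-test
  labels:
    # Binds this user to the mtls-cluster
    strimzi.io/cluster: mtls-cluster
spec:
  authentication:
    type: tls
```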

After some seconds, the Strimzi cluster operator should have created a specific Secret you can extract credentials from. In particular, the operator has created a PKCS12 keystore holding the client private key for us. Just use the following commands:

$ kubectl get secret mtls-user -n kafka-test -o jsonpath='{.data.user\.p12}' | base64 -d > mtls-user.p12
$ kubectl get secret mtls-user -n kafka-test -o jsonpath='{.data.user\.password}' | base64 -d
---- OUTPUT ----
timpMsibd2rl

We are now done on the cluster side, so let’s configure our Spring Boot client. This time we have to specify a keystore, in addition to the truststore already present for holding the TLS transport certificate, and set the passwords accordingly:
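A sketch of the Spring Boot application.yml for mTLS; the bootstrap route host and file paths are placeholders for your environment:

```yaml
spring:
  kafka:
    # Placeholder: the external bootstrap route exposed by OpenShift
    bootstrap-servers: mtls-cluster-kafka-external-bootstrap-kafka-test.apps.example.com:443
    security:
      protocol: SSL
    ssl:
      # Truststore extracted from the mtls-cluster-cluster-ca-cert Secret
      trust-store-location: file:/deployments/config/mtls-cluster-ca.p12
      trust-store-password: sHgN5VLVJCzU
      trust-store-type: PKCS12
      # Keystore holding the client private key, extracted from the mtls-user Secret
      key-store-location: file:/deployments/config/mtls-user.p12
      key-store-password: timpMsibd2rl
      key-store-type: PKCS12
```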

We do the same thing on the Quarkus side, but now twice, because we have to configure both the bare Kafka client and the Reactive Messaging client. Check our reference configuration below with the values extracted above:
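A sketch of the Quarkus application.properties for mTLS, with both prefixes; bootstrap host and file paths are placeholders:

```properties
# --- Bare Kafka client: 'kafka.' prefix ---
kafka.bootstrap.servers=mtls-cluster-kafka-external-bootstrap-kafka-test.apps.example.com:443
kafka.security.protocol=SSL
kafka.ssl.truststore.location=/deployments/config/mtls-cluster-ca.p12
kafka.ssl.truststore.password=sHgN5VLVJCzU
kafka.ssl.truststore.type=PKCS12
kafka.ssl.keystore.location=/deployments/config/mtls-user.p12
kafka.ssl.keystore.password=timpMsibd2rl
kafka.ssl.keystore.type=PKCS12

# --- Reactive Messaging client: 'mp.messaging.<direction>.<channel-name>.' prefix ---
mp.messaging.incoming.microcks-services-updates.connector=smallrye-kafka
mp.messaging.incoming.microcks-services-updates.topic=microcks-services-updates
mp.messaging.incoming.microcks-services-updates.security.protocol=SSL
mp.messaging.incoming.microcks-services-updates.ssl.truststore.location=/deployments/config/mtls-cluster-ca.p12
mp.messaging.incoming.microcks-services-updates.ssl.truststore.password=sHgN5VLVJCzU
mp.messaging.incoming.microcks-services-updates.ssl.keystore.location=/deployments/config/mtls-user.p12
mp.messaging.incoming.microcks-services-updates.ssl.keystore.password=timpMsibd2rl
```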

Wrap-up

In this blog post, we saw how to configure secured Kafka clusters on Kubernetes using the Strimzi Operator and its custom resources. We also learned how to configure Java clients using two popular frameworks: Spring Boot and Quarkus. This post is task-oriented, whereas the Strimzi documentation is feature-oriented and sometimes harder to grasp for developers.

Thanks for the read and stay tuned!

