Secure Kafka with Keycloak: SASL OAuth Bearer

This post walks step by step through configuring the Strimzi operator (Apache Kafka) on OpenShift: exposing an external listener as a route over TLS and securing the Kafka cluster with Keycloak using the SASL OAUTHBEARER mechanism.

If you don’t want to do all this configuration yourself, an easier option is OpenShift Streams for Apache Kafka.

If you aren’t familiar with concepts like SASL for OAuth, let’s first go over how SASL OAUTHBEARER works. [Simple Authentication and Security Layer (SASL) Mechanisms for OAuth]

OAuth 2.0 Protocol Flow


The setup: Apache Kafka deployed with the Strimzi operator and the Keycloak operator on OpenShift, plus two Quarkus-based clients, a producer and a consumer, running on an external local system.

  • During provisioning of the Kafka cluster, the broker fetches the JWKS certificates from Keycloak.
  • Configure a service account in Keycloak for the producer/consumer.
  • Using the service account, the client sends a request to Keycloak’s OpenID Connect token endpoint to fetch a token.
  • The producer/consumer sends the token along when producing or consuming messages.
  • The broker verifies the token, authenticates the client, and allows it to produce/consume.
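The token-fetch step above can be sketched with curl against Keycloak’s OpenID Connect token endpoint. The host, realm, client ID, and secret below are placeholder assumptions; the curl call is shown commented out because it needs a live Keycloak:

```shell
# Placeholder values -- substitute your own Keycloak host, realm, and service account.
KEYCLOAK_HOST="https://keycloak.example.com"
REALM="demo"
CLIENT_ID="kafka-client-service-acc"
CLIENT_SECRET="changeme"

# Keycloak's OpenID Connect token endpoint for the realm
TOKEN_ENDPOINT="$KEYCLOAK_HOST/auth/realms/$REALM/protocol/openid-connect/token"
echo "$TOKEN_ENDPOINT"

# Fetch an access token with the client_credentials grant (requires a live Keycloak):
# curl -s -X POST "$TOKEN_ENDPOINT" \
#   -d "grant_type=client_credentials" \
#   -d "client_id=$CLIENT_ID" \
#   -d "client_secret=$CLIENT_SECRET"
```

The JSON response contains an access_token field, which the client then presents to the broker over SASL OAUTHBEARER.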

Installation

Option A: OLM one-click install

  • Keycloak Operator
  • Strimzi Operator

Option B: Manual install. First, the Strimzi operator:

git clone
cd strimzi-kafka-operator

Update the namespace in which you want to deploy the operator.

sed -i '' 's/namespace: .*/namespace: kafka-demo/' install/cluster-operator/*RoleBinding*.yaml

Login & Create the namespace/project (kafka-demo)

oc login <cluster>
oc new-project kafka-demo

Create the Cluster Operator in the namespace kafka-demo

kubectl apply -f install/cluster-operator -n kafka-demo

Check the status of the operator pod

oc get pods -n kafka-demo
Next, the Keycloak operator:

git clone
cd keycloak-operator

Login & Create the namespace/project

oc login <cluster>
oc new-project kc
  • Run make cluster/prepare
  • Run kubectl apply -f deploy/operator.yaml
  • Run kubectl apply -f deploy/examples/keycloak/keycloak.yaml

Once Keycloak is up and running, create a realm: demo

Creating Kafka Cluster

Whether you used OLM or followed the manual steps, we can now create the Kafka instance.

git clone
cd kafka-sasl-oauth-keycloak/CR

Update the authentication spec in the Kafka Custom Resource. Open my-cluster.yaml in your preferred editor.

Configure the keycloak end-points:

  • Fetch the Keycloak route

export NAMESPACE=<namespace>

export KEYCLOAK_ROUTE=$(oc get route keycloak -n $NAMESPACE --template='{{ .spec.host }}')
  • Fetch the keycloak route SSL certificate.
echo "" | openssl s_client -servername $KEYCLOAK_ROUTE -connect $KEYCLOAK_ROUTE:443 -prexit 2>/dev/null| openssl x509 -outform PEM > keycloak.crt
  • Create a secret with this Keycloak certificate
oc create secret generic ca-keycloak --from-file=keycloak.crt -n kafka-demo
  • Create a truststore that can be used by the Producer/Consumer application
keytool -keystore keycloak.jks -alias root -import -file keycloak.crt -storepass password -noprompt

Authentication Listener Spec:

  • update <keycloak-host>
  • realm: demo

    - name: external
      port: 9094
      tls: true
      type: route
      authentication:
        type: oauth
        checkIssuer: true
        jwksEndpointUri: >-
          https://<keycloak-host>/auth/realms/demo/protocol/openid-connect/certs
        userNameClaim: preferred_username
        checkAccessTokenType: true
        accessTokenIsJwt: true
        enableOauthBearer: true
        validIssuerUri: >-
          https://<keycloak-host>/auth/realms/demo
        tlsTrustedCertificates:
          - certificate: keycloak.crt
            secretName: ca-keycloak
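For orientation, this external listener entry sits under spec.kafka.listeners in the Kafka custom resource. A minimal sketch of that surrounding structure (the cluster name my-cluster matches the CR used below; this is not the full resource):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        tls: true
        type: route
        authentication:
          type: oauth
```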

Create the Kafka cluster in the same or a different namespace. In this case, we use the namespace kafka-demo:

oc create -f keycloak-integrations/kafka-sasl-oauth-keycloak/CR/my-cluster.yaml

Accessing the Cluster using routes

$ oc get -n kafka-demo routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

Output: the Kafka broker bootstrap URL, e.g. my-cluster-kafka-bootstrap-kafka-demo.<openshift-cluster-domain-url>

Fetch the ca-cert from the kafka-demo namespace

oc extract -n kafka-demo secret/my-cluster-cluster-ca-cert --keys=ca.crt

Create a truststore for the Producer/Consumer App

keytool -import -trustcacerts -alias root -file ca.crt -keystore truststore.jks -storepass password -noprompt
cd kafka-sasl-oauth-keycloak/producer or cd kafka-sasl-oauth-keycloak/consumer

Open both applications in your favorite editor as Maven-based projects.

Service Account in Keycloak

Create a service account in Keycloak: kafka-client-service-acc (or any other name). Make sure to enable the flag “Service Account Enabled: ON.”

Copy the ClientId and Credentials

Export the following properties, or update the corresponding values in the application configuration:

export KEYCLOAK_HOST=https://<keycloak-host>
export KAFKA_BOOSTRAP_HOST=my-cluster-kafka-bootstrap-kafka-demo.apps.<domain>:443
export KEYCLOAK_CLIENT_ID=kafka-client-service-acc
export KEYCLOAK_REALM=demo
export KEYCLOAK_TRUSTSTORE=keycloak.jks
export KAFKA_TRUSTSTORE=truststore.jks
export KAFKA_TOPIC=new-topic
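For reference, these values typically feed Kafka client properties like the following. This is a sketch assuming the Strimzi OAuth client library (io.strimzi:kafka-oauth-client); the exact property keys and placeholder values in your project may differ:

```properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
# Strimzi's callback handler exchanges the client credentials for a token
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="kafka-client-service-acc" \
  oauth.client.secret="<client-secret>" \
  oauth.token.endpoint.uri="https://<keycloak-host>/auth/realms/demo/protocol/openid-connect/token" \
  oauth.ssl.truststore.location="keycloak.jks" \
  oauth.ssl.truststore.password="password" ;
# Truststore holding the Kafka route's CA certificate
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
```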

Run the producer & consumer app

./mvnw quarkus:dev

Congratulations! If you can see messages being produced and consumed, you now have produce/consume traffic secured by SASL over the OAuth mechanism.

Security Isolation Concern

Our setup works great, but any service account created in that Keycloak realm (demo) can authenticate against your Kafka cluster.

How to restrict authentication for a set of service accounts?

A spec called customClaimCheck was recently added to the authentication listener spec in the Strimzi operator. It lets you require a custom claim in the token, expressed as a JsonPath filter query, which is checked during the authentication flow.

e.g., customClaimCheck: @.userId == '123'

Let’s update the CR and redeploy the cluster.

- name: external
  port: 9094
  tls: true
  type: route
  authentication:
    type: oauth
    checkIssuer: true
    jwksEndpointUri: >-
      https://<keycloak-host>/auth/realms/demo/protocol/openid-connect/certs
    userNameClaim: preferred_username
    clientId: kafka-broker
    clientSecret:
      key: kafka-broker
      secretName: clientsecret
    checkAccessTokenType: true
    accessTokenIsJwt: true
    checkAudience: false
    enableOauthBearer: true
    customClaimCheck: '''kafka-user'' in @.realm_access.roles'
    validIssuerUri: >-
      https://<keycloak-host>/auth/realms/demo
    tlsTrustedCertificates:
      - certificate: keycloak.crt
        secretName: ca-truststore

Now try to run the producer or consumer application. You will get this error:

org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed due to an invalid token: io.strimzi.kafka.oauth.validator.TokenValidationException: Token validation failed: Custom claim check failed

Authentication failed! To authenticate successfully, we need to add a realm role to our service account configuration.

Create a role in Keycloak (demo realm)

Now add the realm role to our service account.

If you inspect the token claims, you can see the role “kafka-user” added to the roles array in the realm_access object:

"realm_access": {
  "roles": ["kafka-user", ".."]
}

The custom claim check rule is then: 'kafka-user' in @.realm_access.roles
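To inspect those claims yourself, you can base64-decode the payload segment of the token. A real JWT is three dot-separated base64url-encoded segments (header.payload.signature); the sample payload below is a hand-made assumption for illustration:

```shell
# Encode a sample payload the way a JWT carries it, then decode it.
# A real token's payload is the middle dot-separated segment, base64url-encoded.
PAYLOAD=$(printf '{"realm_access":{"roles":["kafka-user"]}}' | base64)
printf '%s' "$PAYLOAD" | base64 -d
echo
```

With a real token, decoding the middle segment the same way reveals whether realm_access.roles contains the role your customClaimCheck expects.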

Let’s try to run the producer app again.

Great, we are now able to authenticate, and any other service account without that “kafka-user” role won’t be able to authenticate.


We deployed a secure Kafka cluster using Keycloak with the help of SASL over the OAuth mechanism. Access is easier to control through the Keycloak admin interface, and we also addressed the advanced security challenge of restricting which service accounts can authenticate against a Kafka instance.

If you like this post, give it a Cheer!!!

Follow the Collection: Keycloak for learning more…

Happy Secure Coding ❤


