Strimzi (Kafka): OpenShift routes as an external listener
In this post, we will walk step by step through configuring the Strimzi operator and using OpenShift routes as an external listener, secured with SASL_SSL.
Setup
- CRC or an OpenShift instance
- strimzi-operator
- Kafka CLI tools
- kubectl/oc tools
Installation:
OLM: One-click install
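Installing through OperatorHub is a one-click affair. If you prefer to do it declaratively, here is a minimal Subscription sketch (the channel and catalog-source names are assumptions; verify them against what OperatorHub shows on your cluster):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable                        # assumed channel name
  name: strimzi-kafka-operator
  source: community-operators            # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic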
Locally
git clone https://github.com/strimzi/strimzi-kafka-operator.git
cd strimzi-kafka-operator
Update the namespace in which you want to deploy the operator:
sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
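Note: the empty '' after -i is macOS/BSD sed syntax; on Linux (GNU sed), drop it:

sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml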
Log in and create the namespace/project:
oc login <cluster>
oc new-project <projectname>   # kafka as the project name
Create the Cluster Operator in the kafka namespace:
kubectl apply -f install/cluster-operator -n kafka
Check the status of the operator pod
oc get pods -n kafka
Creating the Kafka Cluster
Whether you used OLM or followed the manual steps, we can now create a Kafka instance.
I am using one of the sample CRs:
strimzi-kafka-operator/examples/security/scram-sha-512-auth
Update the Kafka CR with the external listener type set to route and the authentication type scram-sha-512:
kafka:
  authorization:
    type: simple
  config:
    log.message.format.version: '2.6'
    offsets.topic.replication.factor: 3
    transaction.state.log.min.isr: 2
    transaction.state.log.replication.factor: 3
  listeners:
    external:
      authentication:
        type: scram-sha-512
      type: route
    plain: {}
    tls: {}
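For reference, the snippet above sits under spec in the full Kafka resource. A minimal sketch of the surrounding CR, assuming the v1beta1 API used by these examples (replica counts and ephemeral storage are illustrative, not a production setup):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: route
        authentication:
          type: scram-sha-512
    authorization:
      type: simple
    config:
      log.message.format.version: '2.6'
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}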
Create the Instance
oc apply -f kafka.yaml
We can check the pods to confirm our cluster is now running:
$ oc get pods -n kafka
my-cluster-kafka-0 1/1 Running 0
my-cluster-zookeeper-0 1/1 Running 0
Create a Kafka User:
cd strimzi-kafka-operator/examples/security/scram-sha-512-auth
oc apply -f user.yaml
with the spec:
authentication:
  type: scram-sha-512
authorization:
  type: simple
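Put together, a minimal KafkaUser sketch (the strimzi.io/cluster label must match our cluster name; the ACLs on my-topic and my-group are assumptions to support the produce/consume test later):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: All
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: All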
Make sure all the pods are running without any issue.
Accessing the Cluster using routes
Get the host of the bootstrap listener exposed as an OpenShift route:
$ oc get -n kafka routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'
e.g. my-cluster-kafka-bootstrap-kafka.<openshift-cluster-domain-url>
Fetch the ca-cert created for the cluster:
oc extract -n kafka secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
Create a trust store using the ca-cert
keytool -import -trustcacerts -alias root -file ca.crt -keystore truststore.jks -storepass password -noprompt
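Optionally, sanity-check the TLS handshake against the route before wiring up any clients (openssl s_client is a standard tool; substitute the bootstrap host fetched earlier):

openssl s_client -connect my-cluster-kafka-bootstrap-kafka.<cluster-domain-url>:443 -servername my-cluster-kafka-bootstrap-kafka.<cluster-domain-url> -CAfile ca.crt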
We can create a property file with all the config required by the Kafka clients. Note that values stored in Kubernetes secrets are base64 encoded, so the password from the my-user secret must be decoded before use.
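One way to fetch and decode the password (assuming the my-user secret generated by the User Operator):

oc get secret my-user -n kafka -o jsonpath='{.data.password}' | base64 --decode

With the decoded password in hand, the property file looks like this: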
ssl.truststore.location=./truststore.jks
ssl.truststore.password=password
ssl.truststore.type=JKS
security.protocol=SASL_SSL
bootstrap.servers=my-cluster-kafka-bootstrap-kafka.<cluster-domain-url>:443
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-user" password="<password decoded from the my-user secret>";
sasl.mechanism=SCRAM-SHA-512
Test the setup
$ ./bin/kafka-broker-api-versions.sh --bootstrap-server my-cluster-kafka-bootstrap-kafka.<cluster-url-domain>:443 --command-config config.txt

// Response
my-cluster-kafka-0-kafka.apps.<cluster-url-domain>:443 (id: 0 rack: null) -> (
Produce(0): 0 to 8 [usable: 8],
Fetch(1): 0 to 11 [usable: 11],
ListOffsets(2): 0 to 5 [usable: 5],
Metadata(3): 0 to 9 [usable: 9],
LeaderAndIsr(4): 0 to 4 [usable: 4],
...
)
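The produce/consume test below targets a topic named my-topic. If it does not exist yet, one way to create it is through the Topic Operator (a minimal sketch; the partition and replica counts are illustrative):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3

Save it to a file and apply it with oc apply -n kafka -f <file>.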
We are now good to produce & consume.
$ ./bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap-kafka.<cluster-url-domain>:443 --producer.config config.txt --topic my-topic
>hello
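And to read the message back, a matching console-consumer invocation (assuming the same config.txt also works as the consumer config, and that my-user's ACLs allow reading my-topic with group my-group):

$ ./bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap-kafka.<cluster-url-domain>:443 --consumer.config config.txt --topic my-topic --group my-group --from-beginning
hello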
Conclusion
We went through the process of using an OpenShift route as the external listener to our Kafka cluster, secured with TLS and a username/password (SCRAM-SHA-512) authentication mechanism.
Thank you for reading.
If you like this post, give a Cheer!!!
Happy Secure Coding ❤