Let’s secure our Kafka components in the Kafka cluster with TLS!

Natthanan Bhukan
CJ Express Tech (TILDI)
14 min read · Mar 21, 2023

Hi, my name is Tae, and I am a Machine Learning Engineer (MLE) at CJ Express Tech (TILDI). I wrote this article to share what I learned about securing Kafka clusters with TLS. P’ Mil, a Data Engineer on my team, wrote Securing Kafka with TLS/SSL authentication, which explains TLS/SSL and why we should use it. This article extends that work by showing how to stop relying on ANONYMOUS as a superuser and instead secure Kafka components such as Kafka UI, Kafka Connect, and the Kafka connectors.

Table of contents

  • How do Microservices secure communication with mTLS?
  • Setup demo
  • How to configure Kafka UI to have a secure connection?
  • How to configure Kafka Connect to have a secure connection?
  • How to configure the Schema Registry to have a secure connection?
  • Summary

How do Microservices secure communication with mTLS?

In general, most software today uses a microservice architecture for better performance. However, this architecture introduces several problems, and the one software engineers face most often is security. As a result, we use the TLS/SSL protocol to secure communication between services; more precisely, not just plain TLS/SSL, but mTLS, or mutual TLS.

What Is mTLS? (F5.com)

mTLS is an extension of TLS/SSL in which the client is identified as well, not just the server. The concept is based on zero-trust security: a service must validate and identify both the server and the client to ensure a request comes from an authorized client. As a consequence, if one service is exploited, we can contain the damage, because services that are not associated with it will not accept its requests.
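To make the difference concrete, here is a rough sketch with openssl (the broker host name and file names are placeholders): with plain TLS only the server proves its identity, while with mTLS the client must also present a certificate signed by a CA the broker trusts, or the handshake is rejected.

# Plain TLS: the client only verifies the server's certificate
openssl s_client -connect my-kafka-broker:9093 -CAfile ca.crt </dev/null

# Mutual TLS: the client also presents its own certificate and key;
# a broker that requires client auth will fail the handshake without them
openssl s_client -connect my-kafka-broker:9093 \
  -CAfile ca.crt -cert user.crt -key user.key </dev/null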

Setup demo

This article walks through a simple Kafka Connect setup that transfers data from a database into object storage, using SQL Server as the source and Google Cloud Storage (GCS) as the sink on Google Cloud Platform (GCP), as shown in the diagram below.

Software architecture diagram

1. Create a service account to allow Kafka Connect to access GCS

Create service account

This service account allows the Kafka connector inside Kafka Connect to access a bucket in GCS.

Generate key from service account

Next, we need to generate a service account key to use with the Kafka connector, and don’t forget to grant this service account access to GCS. In my case, I grant it the Storage Object Admin role since this is a lab, but I do not recommend doing that in production; you should set properly fine-grained permissions instead.
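If you prefer the CLI over the console, creating the service account, granting it Object Admin, and generating a key looks roughly like the sketch below (the account name and project are examples for this lab):

# Create the service account (names are illustrative)
gcloud iam service-accounts create kafka-lab --display-name="Kafka Connect GCS sink"

# Lab-only shortcut: project-wide Object Admin. In production, scope this
# to a single bucket with a least-privilege role instead.
gcloud projects add-iam-policy-binding kafka-lab \
  --member="serviceAccount:kafka-lab@kafka-lab.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# Generate the JSON key used later by the GCS sink connector
gcloud iam service-accounts keys create key.json \
  --iam-account=kafka-lab@kafka-lab.iam.gserviceaccount.com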

2. Provision a Kubernetes cluster

GKE

Since we are deploying a Kafka cluster, we need a Kubernetes cluster to run it on, and I highly recommend provisioning it inside a private VPC.
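The equivalent gcloud command is roughly the sketch below (the cluster name and zone match the port-forward command used later in this article; the private networking flags are illustrative):

gcloud container clusters create kafka-cluster \
  --project kafka-lab \
  --zone asia-southeast1-a \
  --num-nodes 3 \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28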

3. Provision a database to be the source

Create a database

Provision a SQL Server 2019 database server to act as the source for the Kafka connector. To keep costs down, this article provisions the database with the Development preset and attaches it to the private VPC.

Create database instance

Next, we need to create a database to hold the data used in this tutorial, which I name “animal”.
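As a sketch, the same steps with gcloud look something like this (the instance name, tier, and network are placeholders; only the database name “animal” comes from this article):

# Create a SQL Server 2019 instance on the private VPC
gcloud sql instances create kafka-source-db \
  --database-version=SQLSERVER_2019_STANDARD \
  --tier=db-custom-2-7680 \
  --region=asia-southeast1 \
  --no-assign-ip \
  --network=default \
  --root-password='<password>'

# Create the database used in this tutorial
gcloud sql databases create animal --instance=kafka-source-db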

Create table and seed some data
create table random (
id INT,
name VARCHAR(50)
);

insert into random (id, name) values (1, 'Waterbuck, defassa');
insert into random (id, name) values (2, 'Dromedary camel');
insert into random (id, name) values (3, 'Sheep, american bighorn');
insert into random (id, name) values (4, 'Common zebra');
insert into random (id, name) values (5, 'Greater kudu');
insert into random (id, name) values (6, 'Pygmy possum');
insert into random (id, name) values (7, 'Kangaroo, black-faced');
insert into random (id, name) values (8, 'Cockatoo, slender-billed');
insert into random (id, name) values (9, 'Duck, blue');
insert into random (id, name) values (10, 'Elk, Wapiti');

After that, create a new table in the “animal” database and seed some data into it; I name the table “random”.

CDC
EXEC msdb.dbo.gcloudsql_cdc_enable_db 'animal'

EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'random',
@role_name = N'CDC'

EXECUTE sys.sp_cdc_help_change_data_capture
@source_schema = N'dbo',
@source_name = N'random'

Last, enable change data capture (CDC) so SQL Server tracks changes on the table and stores them in change tables; Kafka Connect will then transfer this change data into the Kafka broker.
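To confirm CDC is actually enabled before wiring up Debezium, you can query the SQL Server catalog views, for example with sqlcmd (the host and credentials below are the ones used later in the connector config):

sqlcmd -S 10.242.128.3 -U sqlserver -P '<password>' -d animal -Q "
  SELECT name, is_cdc_enabled    FROM sys.databases WHERE name = 'animal';
  SELECT name, is_tracked_by_cdc FROM sys.tables    WHERE name = 'random';"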

4. Provision a GCS bucket as the sink

Create bucket

First, we need to provision a bucket to be a sink, which allows Kafka Connect to transfer data into this bucket.

Create folder

Next, I create a folder named “animal” to be the sink folder that stores data from the database whenever new data is ingested into the system.
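With the CLI this is just a bucket plus an object prefix, since GCS folders are really prefixes (the bucket name comes from the sink connector config later in this article; the region is an example):

gsutil mb -l asia-southeast1 gs://kafka-source-lab
# An optional empty placeholder object makes the animal/ prefix visible in the console
gsutil cp /dev/null gs://kafka-source-lab/animal/.keep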

5. Configure the Kafka cluster to allow authorized users

In this article, I provide a Helm chart for you; you can follow the instructions in its README.md file, and I will explain the configuration later in this article.
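If the Strimzi operator is not yet running in your cluster (the Kafka and KafkaUser resources below are Strimzi CRDs), installing it from the public Strimzi Helm repository looks roughly like this:

helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace kafka --create-namespace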

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: cluster
  namespace: kafka
spec:
  kafka:
    version: 3.3.1
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        tls: false
        type: internal
      - authentication:
          type: tls
        name: tls
        port: 9093
        tls: true
        type: internal
    authorization: # <-- List the users authorized to access the cluster in this section
      superUsers:
        - CN=user
      type: simple
    config:
      default.replication.factor: 3
      inter.broker.protocol.version: "3.3"
      min.insync.replicas: 2
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    storage:
      type: jbod
      volumes:
        - deleteClaim: false
          id: 0
          size: 100Gi
          type: persistent-claim
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: false
      size: 100Gi
      type: persistent-claim
  entityOperator:
    topicOperator: {}
    userOperator: {}

First, we need to list the users that are authorized to access the Kafka cluster in the authorization section. For example, I allow the user named “user” to be a super user, which means that in this article I use this one user for every service; the best practice, however, is to create a separate user for each service. I create only one user here to keep the flow simple so the components are easier to understand.

6. Create a Kafka user

Next, we need to create a Kafka user that the components will use to access the Kafka cluster.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  namespace: kafka
  name: user
  labels:
    strimzi.io/cluster: cluster
spec:
  authentication:
    type: tls
  authorization:
    acls:
      - operation: All
        resource:
          name: '*'
          patternType: literal
          type: topic
      - operation: All
        resource:
          name: '*'
          patternType: literal
          type: group
      - operation: All
        resource:
          type: cluster
    type: simple
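Once this KafkaUser is applied, the Strimzi User Operator issues a client certificate and stores it in a secret named after the user; a quick check looks like this (the manifest file name is an example):

kubectl apply -f kafka-user.yaml -n kafka

# The User Operator creates a secret called "user" containing
# ca.crt, user.crt, user.key, user.p12 and user.password
kubectl describe secret user -n kafka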

7. Generate a certificate for the client

After creating a Kafka user, we need to generate the client keystore and truststore so the server can validate the client using this certificate; I created a Makefile to generate them.

SHELL := /usr/bin/env bash

help:
	@echo ""
	@echo "Usage: make [TARGET] [OPTIONAL_ARGUMENTS]"
	@echo ""
	@echo "Where:"
	@echo " cluster=<cluster_name> clustername"
	@echo " user=<username> username"
	@echo " password=<password> password"
	@echo ""
	@echo "example"
	@echo "Usage: make create-secret cluster=datalake-k8s-staging user=user password=password"
	@echo "Usage: make create-secret cluster=datalake-k8s-prod user=user password=password"

create-secret:
	mkdir -p $(user)

## generate file in local: ca.crt
	kubectl get secret $(cluster)-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 --decode > $(user)/ca.crt

	keytool -keystore $(user)/truststore.jks -storepass $(password) -alias CARoot -import -file $(user)/ca.crt -noprompt

## generate file in local: user.crt, user.key, user.p12
	kubectl get secret $(user) -n kafka -o jsonpath='{.data.user\.crt}' | base64 --decode > $(user)/user.crt
	kubectl get secret $(user) -n kafka -o jsonpath='{.data.user\.key}' | base64 --decode > $(user)/user.key
	openssl pkcs12 -export -in $(user)/user.crt -inkey $(user)/user.key -password pass:$(password) -out $(user)/keystore.p12

## create k8s secret as truststore keystore
	kubectl create secret generic $(user)-auth-tls \
		--from-file=$(user)/truststore.jks \
		--from-file=$(user)/keystore.p12 \
		--from-literal=password=$(password) \
		-n kafka

	@echo "Cluster name: $(cluster)"
	@echo "Username: $(user)"
	@echo "Secret name which store certificate: $(user)-auth-tls"

## generate file in local: pass
	echo $(password) > $(user)/pass
Then, run the command below; it will generate the certificate files and save them into a Kubernetes secret.

make create-secret cluster=cluster user=user password=123qweasdzxc
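A quick sanity check of what the Makefile produced, using the same password passed on the command line:

# The truststore should contain the cluster CA, the PKCS#12 keystore the user certificate
keytool -list -keystore user/truststore.jks -storepass 123qweasdzxc
openssl pkcs12 -info -in user/keystore.p12 -passin pass:123qweasdzxc -nokeys

# And the combined secret should now exist in the kafka namespace
kubectl get secret user-auth-tls -n kafka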

How to configure Kafka UI to have a secure connection?

To allow Kafka UI to access the Kafka cluster with TLS/SSL, we need to use the client certificate that we generated in the previous section. First, we mount the secret as a volume (1) and use the certificate files from it (2). Then, in the Kafka UI properties, we configure the truststore and keystore paths and their passwords (3), as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kafka
  name: kafka-ui
  labels:
    app: kafka-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-ui
  template:
    metadata:
      labels:
        app: kafka-ui
    spec:
      volumes: # <- Create a volume from the secret (1)
        - name: kafka-user-auth-tls
          secret:
            secretName: user-auth-tls
      containers:
        - name: ui
          image: provectuslabs/kafka-ui:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          volumeMounts: # <- Mount the secret volume into the container (2)
            - name: kafka-user-auth-tls
              mountPath: "/tmp"
              readOnly: true
          env:
            - name: KAFKA_CLUSTERS_0_NAME
              value: "cluster"
            - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
              value: "http://cluster-kafka-bootstrap:9093"
            # NEED TO CONFIG THIS TO CONNECT WITH TLS (3)
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL
              value: SSL
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION
              value: /tmp/truststore.jks
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-auth-tls
                  key: password
                  optional: false
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION
              value: /tmp/keystore.p12
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-auth-tls
                  key: password
                  optional: false
            # END OF TLS CONFIGURATION
            - name: KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME
              value: "kafka_connect_source"
            - name: KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS
              value: "http://kafka-connect-mssql-2019-source-cluster-connect-api:8083"

            - name: KAFKA_CLUSTERS_0_KAFKACONNECT_1_NAME
              value: "kafka_connect_sink"
            - name: KAFKA_CLUSTERS_0_KAFKACONNECT_1_ADDRESS
              value: "http://kafka-connect-gcs-sink-cluster-connect-api:8083"

You can test it by exposing the port and seeing if it works or not.

gcloud container clusters get-credentials kafka-cluster --zone asia-southeast1-a --project kafka-lab \
&& kubectl port-forward --namespace kafka $(kubectl get pod --namespace kafka --selector="app=kafka-ui" --output jsonpath='{.items[0].metadata.name}') 8080:8080
Result from Kafka UI

How to configure Kafka Connect to have a secure connection?

In this article, we have two Kafka Connect clusters, a source and a sink, so each one needs its own configuration.

1. Source connect

To provision Kafka Connect, we need to build a custom image for our source, which is SQL Server 2019; you can build it with the Dockerfile and command below.

FROM quay.io/strimzi/kafka:0.33.1-kafka-3.3.1
USER root:root

RUN mkdir -p /opt/kafka/plugins
ADD https://repo1.maven.org/maven2/io/debezium/debezium-connector-sqlserver/1.9.7.Final/debezium-connector-sqlserver-1.9.7.Final-plugin.tar.gz /opt/kafka/plugins
RUN cd /opt/kafka/plugins && tar -xzf debezium-connector-sqlserver-1.9.7.Final-plugin.tar.gz && rm -rf debezium-connector-sqlserver-1.9.7.Final-plugin.tar.gz
USER 1001

docker build -t asia.gcr.io/kafka-lab/kafka-connect-mssql2019:1.0.0 .
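Remember to push the image to a registry that GKE can pull from, for example (assuming GCR and that Docker is authenticated against it):

gcloud auth configure-docker
docker push asia.gcr.io/kafka-lab/kafka-connect-mssql2019:1.0.0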

Next, to allow Kafka Connect to communicate with the Kafka cluster over TLS, we first set the cluster CA (1) so the client can verify the certificate presented by the server, and then (2) select the user secret that Kafka Connect authenticates with. For this part we do not need to generate our own keystore, since Kafka Connect (via Strimzi) handles it.

Moreover, we need to mount the secret as a volume in externalConfiguration, and in the config section we add config.providers: directory and config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider, which lets Kafka Connect read the mounted files; we will use these files in the Kafka connector (3).

https://strimzi.io/blog/2021/07/22/using-kubernetes-config-provider-to-load-data-from-secrets-and-config-maps/
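If you want to verify that the DirectoryConfigProvider has something to read, you can list the mounted files inside the Connect pod once it is running; Strimzi places each externalConfiguration volume under /opt/kafka/external-configuration/<volume-name>. The deployment name below is a guess based on how Strimzi names Connect resources, so adjust it to whatever `kubectl get pods -n kafka` shows.

kubectl exec -n kafka deploy/kafka-connect-mssql-2019-source-cluster-connect -- \
  ls /opt/kafka/external-configuration/kafka-user-auth-tls
# expected: keystore.p12  password  truststore.jks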

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  namespace: kafka
  name: kafka-connect-mssql-2019-source-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  image: "asia.gcr.io/kafka-lab/kafka-connect-mssql2019:1.0.0"
  version: 3.2.3
  replicas: 1
  bootstrapServers: "cluster-kafka-bootstrap:9093"
  tls: # <- Set the cluster CA certificate (1)
    trustedCertificates:
      - secretName: "cluster-cluster-ca-cert"
        certificate: ca.crt
  authentication: # <- Set the user certificate (2)
    type: tls
    certificateAndKey:
      secretName: user
      certificate: user.crt
      key: user.key
  externalConfiguration: # <- Mount the volume with the Kafka connector certificate (3)
    volumes:
      - name: kafka-user-auth-tls
        secret:
          secretName: user-auth-tls
  config:
    config.providers: directory,secrets
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
    config.storage.replication.factor: 3
    config.storage.topic: ms-source-connect-cluster-configs
    group.id: ms-source-connect-cluster
    key.converter.schemas.enable: false
    offset.storage.replication.factor: 3
    offset.storage.topic: ms-source-connect-cluster-offsets
    status.storage.replication.factor: 3
    status.storage.topic: ms-source-connect-cluster-status
    value.converter.schemas.enable: false

After that, we also need to configure the Kafka connector, since Kafka Connect is just the task manager; the component that actually processes the data is the connector. In this article, we use Debezium as the plugin that pulls data from the source database into the Kafka broker. You may think this part does not require TLS configuration, but it does, because Debezium needs to create a topic inside Kafka for its database history.

Furthermore, to let the Kafka connector communicate with the Kafka cluster, you need to set the keystore and truststore paths and their passwords (1), the same as for Kafka UI. That is why we mounted the secret as a volume.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  namespace: kafka
  name: source-sql-server-connector
  labels:
    strimzi.io/cluster: kafka-connect-mssql-2019-source-cluster
spec:
  class: io.debezium.connector.sqlserver.SqlServerConnector
  tasksMax: 1
  config:
    connector.class: io.debezium.connector.sqlserver.SqlServerConnector
    database.dbname: animal
    database.encrypt: true
    database.history.kafka.topic: booboo.history.random-topic
    database.hostname: 10.242.128.3
    database.password: 123qweasdzxc
    database.port: 1433
    database.server.name: cloudSQL
    database.ssl: true
    database.trustServerCertificate: true
    database.user: sqlserver
    errors.deadletterqueue.topic.name: cdc_random-topic
    errors.tolerance: all
    key.converter.schemas.enable: false
    offset.flush.interval.ms: 15000
    poll.interval.ms: 1000
    snapshot.mode: initial
    table.include.list: dbo.random
    topic: cloudSQL.dbo.random
    value.converter.schemas.enable: false
    # TLS configuration (1)
    database.history.kafka.bootstrap.servers: "cluster-kafka-bootstrap:9093"
    database.history.producer.security.protocol: SSL
    database.history.producer.ssl.keystore.location: /opt/kafka/external-configuration/kafka-user-auth-tls/keystore.p12
    database.history.producer.ssl.keystore.password: 123qweasdzxc
    database.history.producer.ssl.truststore.location: /opt/kafka/external-configuration/kafka-user-auth-tls/truststore.jks
    database.history.producer.ssl.truststore.password: 123qweasdzxc
    database.history.consumer.security.protocol: SSL
    database.history.consumer.ssl.keystore.location: /opt/kafka/external-configuration/kafka-user-auth-tls/keystore.p12
    database.history.consumer.ssl.keystore.password: 123qweasdzxc
    database.history.consumer.ssl.truststore.location: /opt/kafka/external-configuration/kafka-user-auth-tls/truststore.jks
    database.history.consumer.ssl.truststore.password: 123qweasdzxc
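Because the connector is managed as a Strimzi custom resource, its health shows up in the resource status; a quick check looks like this:

# The status conditions report whether the connector is Ready,
# i.e. Debezium could reach both SQL Server and the Kafka cluster
kubectl get kafkaconnector -n kafka
kubectl describe kafkaconnector source-sql-server-connector -n kafka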

2. Sink connect

Like the source, we need to build a custom image for our sink, which you can build with the Dockerfile and command below.

FROM quay.io/strimzi/kafka:0.33.1-kafka-3.3.1
USER root:root

RUN mkdir -p /opt/kafka/plugins
ADD https://github.com/aiven/gcs-connector-for-apache-kafka/releases/download/v0.9.0/aiven-kafka-connect-gcs-0.9.0.tar /opt/kafka/plugins
RUN cd /opt/kafka/plugins && tar -xf aiven-kafka-connect-gcs-0.9.0.tar && rm -rf aiven-kafka-connect-gcs-0.9.0.tar

USER 1001

docker build -t asia.gcr.io/kafka-lab/kafka-connect-mssql2019:1.0.0 .

Like the source, the sink Connect cluster must set the cluster CA (1) and the user secret (2), but it does not need to mount the certificate volume, because the GCS sink connector does not open its own connection to the Kafka cluster the way Debezium’s history producer and consumer do.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  namespace: kafka
  name: kafka-connect-gcs-sink-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  image: "asia.gcr.io/tildi-playground-0001/mle-kafka-connect-mssql2019:v1.0.0"
  version: 3.2.3
  replicas: 1
  bootstrapServers: "cluster-kafka-bootstrap:9093"
  tls: # <- Set the cluster CA certificate (1)
    trustedCertificates:
      - secretName: "cluster-cluster-ca-cert"
        certificate: ca.crt
  authentication: # <- Set the user certificate (2)
    type: tls
    certificateAndKey:
      secretName: user
      certificate: user.crt
      key: user.key
  config:
    config.storage.replication.factor: 3
    config.storage.topic: ms-source-connect-cluster-configs
    group.id: ms-source-connect-cluster
    key.converter.schemas.enable: false
    offset.storage.replication.factor: 3
    offset.storage.topic: ms-source-connect-cluster-offsets
    status.storage.replication.factor: 3
    status.storage.topic: ms-source-connect-cluster-status
    value.converter.schemas.enable: false

In the sink connector, I set the file name prefix to “animal/random/”, so when data is inserted into the database it is transferred here; you also need to add the key generated from the GCP service account, which I showed earlier.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  namespace: kafka
  name: sink-gcs-connector
  labels:
    strimzi.io/cluster: kafka-connect-gcs-sink-cluster
spec:
  class: io.aiven.kafka.connect.gcs.GcsSinkConnector
  tasksMax: 2
  config:
    file: /opt/kafka/LICENSE
    file.compression.type: gzip
    file.name.prefix: animal/random/
    file.name.template: '{{timestamp:unit=yyyy}}/{{timestamp:unit=MM}}/{{timestamp:unit=dd}}/{{topic}}-{{partition}}-{{start_offset}}.gz'
    format.output.fields: key,value,offset,timestamp
    format.output.type: jsonl
    gcs.bucket.name: kafka-source-lab
    gcs.credentials.json: |
      {
        "type": "service_account",
        "project_id": "kafka-lab",
        "private_key_id": "<private_key_id>",
        "private_key": "<private_key>",
        "client_email": "kafka-lab@kafka-lab.iam.gserviceaccount.com",
        "client_id": "<client_id>",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/kafka-lab%kafka-lab.iam.gserviceaccount.com"
      }
    key.converter.schemas.enable: false
    tasks.max: 1
    topics: cloudSQL.dbo.random
    value.converter.schemas.enable: false

After several configurations, you can execute the command below to see the result.

helm install kafka-cluster kafka-cluster-chart  \
--values kafka-cluster-chart/values.yaml \
--namespace kafka
Helm install

However, you will see some errors at first, because the client certificate that you need to generate manually (as mentioned earlier) does not exist yet. After you generate the certificate, the errors disappear, and you can check the result in the Kafka UI and the GCS bucket.
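Once everything is green, inserting a few more rows into the random table should produce gzipped JSONL objects under the prefix configured in the sink connector; you can confirm this from the CLI as well:

# List the objects written by the GCS sink connector
gsutil ls -r gs://kafka-source-lab/animal/random/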

First install of the cluster (left), after generating the client certificate (right)
Result in GCS bucket
Result in Kafka UI

How to configure the Schema Registry to have a secure connection?

Next, configure the Schema Registry to support TLS connections. The method is the same as for Kafka UI: create a volume from the secret (1), mount it into the container (2), and finally configure the truststore and keystore locations and their passwords (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-cp-schema-registry
  labels:
    app: cp-schema-registry
    chart: cp-schema-registry-0.1.0
    release: schema-registry
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cp-schema-registry
      release: schema-registry
  template:
    metadata:
      labels:
        app: cp-schema-registry
        release: schema-registry
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "5556"
    spec:
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      volumes: # <- Create a volume from the secret (1)
        - name: kafka-user-auth-tls
          secret:
            secretName: user-auth-tls
        - name: jmx-config
          configMap:
            name: release-name-cp-schema-registry-jmx-configmap
      containers:
        - name: prometheus-jmx-exporter
          image: "solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143"
          imagePullPolicy: "IfNotPresent"
          command:
            - java
            - -XX:+UnlockExperimentalVMOptions
            - -XX:+UseCGroupMemoryLimitForHeap
            - -XX:MaxRAMFraction=1
            - -XshowSettings:vm
            - -jar
            - jmx_prometheus_httpserver.jar
            - "5556"
            - /etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
          ports:
            - containerPort: 5556
          resources: {}
          volumeMounts:
            - name: jmx-config
              mountPath: /etc/jmx-schema-registry
        - name: cp-schema-registry-server
          image: "confluentinc/cp-schema-registry:6.2.0"
          imagePullPolicy: "IfNotPresent"
          volumeMounts: # <- Mount the secret volume into the container (2)
            - name: kafka-user-auth-tls
              mountPath: /etc/schema-registry/secrets
              readOnly: true
          ports:
            - name: schema-registry
              containerPort: 8081
              protocol: TCP
            - containerPort: 5555
              name: jmx
          resources: {}
          env:
            - name: SCHEMA_REGISTRY_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SCHEMA_REGISTRY_LISTENERS
              value: http://0.0.0.0:8081
            # TLS configuration (3)
            - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
              value: cluster-kafka-bootstrap:9093
            - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-auth-tls
                  key: password
                  optional: false
            - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-auth-tls
                  key: password
                  optional: false
            - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-auth-tls
                  key: password
                  optional: false
            - name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
              value: "SSL"
            - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION
              value: "/etc/schema-registry/secrets/keystore.p12"
            - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION
              value: "/etc/schema-registry/secrets/truststore.jks"
            - name: SCHEMA_REGISTRY_SSL_KEYSTORE_TYPE
              value: "PKCS12"
            - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_TYPE
              value: "JKS"
            # End of TLS configuration
            - name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
              value: release-name
            - name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
              value: "true"
            - name: SCHEMA_REGISTRY_HEAP_OPTS
              value: "-Xms512M -Xmx512M"
            - name: JMX_PORT
              value: "5555"

Summary

Now we can set up our broker to require a secure connection, which gives our communication much better protection. However, this article does not explain how to set fine-grained permissions or ACLs, which I highly recommend reading about in this blog. Moreover, the Kafka Connect examples I show depend on third-party connectors, so if your project uses a different one, you should read its documentation carefully; in my opinion, though, the way to configure it will be similar.

Reference

https://strimzi.io/blog/2021/07/22/using-kubernetes-config-provider-to-load-data-from-secrets-and-config-maps/
