Installation of Apache Kafka with SSL on Ubuntu 16.04, 18.04 and 20.04

Before we begin, let's understand Kafka and why Apache Kafka in particular.

Introduction :-
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.


Kafka provides an asynchronous protocol for connecting programs together, but it is undoubtedly a bit different from, say, TCP (Transmission Control Protocol), HTTP, or an RPC protocol. The difference is the presence of a broker. A broker is a separate piece of infrastructure that broadcasts messages to any programs that are interested in them, as well as storing them for as long as is needed. So it's perfect for streaming or fire-and-forget messaging.

Kafka's security :- Kafka Streams natively integrates with Kafka's security features and supports all of the client-side security features in Kafka, since Streams leverages the Java Producer and Consumer APIs.

To secure your Stream processing applications, configure the security settings in the corresponding Kafka producer and consumer clients, and then specify the corresponding configuration settings in your Kafka Streams application.

Kafka supports cluster encryption and authentication, including a mix of authenticated and unauthenticated, and encrypted and non-encrypted clients. Using security is optional.

Prerequisites :-
A. To run properly, Apache Kafka needs a server with at least 4 GB of RAM.
B. Apache Kafka is written in Java, so your server needs a Java runtime (JVM), version 8 or above (see the quick check below).
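If Java is not already installed, here is one quick way to check for it and install it. This assumes the OpenJDK packages from Ubuntu's default repositories are acceptable for your setup; any JVM of version 8 or above will do:

$ java -version
$ sudo apt update
$ sudo apt install -y openjdk-8-jdk
$ java -version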

Step 1 :- Create a User on system
Step 2 :- Download and Install Kafka binaries
Step 3 :- Setup Kafka SSL Configuration (Create a Public & Private Key)
Step 4 (Option A) :- If you want to set up only one Kafka broker
Step 4 (Option B) :- If you want to set up three Kafka brokers
Step 5 :- Create a file for the Kafka client
Step 6 :- Create a topic, Producer and Consumer
Step 7 :- Connect from Outside (Offset Explorer Kafka Tool)

Step 1 :- Create a User on system

Logged in as your non-root sudo user, create a user called kafka with the useradd command:

$ sudo useradd kafka

Set the password using passwd:

$ sudo passwd kafka

Add the kafka user to the sudo group with the adduser command:

$ sudo adduser kafka sudo

Your kafka user is now ready. Log into this account using su:

$ su -l kafka

Step 2 :- Download and Install Kafka binaries
Create a directory in /home/kafka called Downloads to store your downloads:

$ mkdir ~/Downloads

Use curl to download the Kafka binaries:

$ curl "https://www.apache.org/dist/kafka/2.1.1/kafka_2.11-2.1.1.tgz" -o ~/Downloads/kafka.tgz

Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation:

$ mkdir ~/kafka && cd ~/kafka

Extract the archive you downloaded using the tar command:

$ tar -xvzf ~/Downloads/kafka.tgz --strip 1
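As a quick sanity check, the extracted base directory should now contain at least the bin, config and libs folders:

$ ls ~/kafka
bin  config  libs  LICENSE  NOTICE  site-docs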

Step 3 :- Setup Kafka SSL Configuration (Create a Public & Private Key)

Steps For SSL Security
Security between Kafka and the Kafka client
1. Create the Certificate Authority (CA) (one-time job)
2. Create the Truststore
3. Create the Keystore
4. Create the Certificate Signing Request (CSR)
5. Sign the certificate using the CA
6. Import the CA certificate into the Keystore
7. Import the signed certificate into the Keystore

Note :- Steps 2 to 7 should be performed on each broker.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — -
Make a directory where you will keep the keys and certificates.

$ mkdir ssl

Go inside the directory:

$ cd ssl

Now we have to follow the steps below.
1. Create the Certificate Authority (CA)

$ openssl req -new -x509 -keyout ca-key -out ca-cert -days 365

After executing the command, it will ask you a few questions:
A. Enter PEM pass phrase for ca-key (keep this password safe)
B. Country Name: UK
C. State or Province Name: London
D. Locality Name: London
E. Organization Name: PBX Pvt Ltd
F. Organization Unit Name: PBX
G. Common Name (server FQDN): localhost
H. Email Address: abc@yahoo.com
Check what it has created: it will have generated ca-cert (the public certificate) and ca-key (the private key).

$ ls
ca-cert  ca-key
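If you would rather answer these questions non-interactively (for example when scripting this step later), openssl also accepts the same values on the command line. The values below simply mirror the answers above, and the pass phrase placeholder is the PEM pass phrase from question A, which is also used for signing later in this guide:

$ openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/C=UK/ST=London/L=London/O=PBX Pvt Ltd/OU=PBX/CN=localhost/emailAddress=abc@yahoo.com" -passout pass:<PassOfPrivateKey>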

— — — — — — — — — — — — — — — — — — — — — — — — — — — — -
“Let's secure Kafka broker 1”
Note :- Pay attention here: you are generating this JKS keystore for one broker. If you have multiple brokers, run steps 2 to 7 for each broker and just change the name: kafka.server1.keystore.jks, kafka.server2.keystore.jks and kafka.server3.keystore.jks (a scripted version of these steps is sketched after step 7 below).
Note :-
keypass = password for the private key
storepass = password for the Truststore and Keystore

I highly recommend keeping all the passwords the same so there is no confusion.
— — — — — — — — —
2. Create the Truststore

$ keytool -keystore kafka.server1.truststore.jks -alias CARoot -import -file ca-cert -storepass <PassForTrust> -keypass <PassOfPrivateKey> -noprompt

3. Create the Keystore

$ keytool -genkey -keystore kafka.server1.keystore.jks -validity 365 -storepass <PassForKeystore> -keypass <PassOfPrivateKey> -ext SAN=dns:localhost -storetype pkcs12

After executing the command, it will ask you a few questions:
A. What is your first and last name: localhost
B. What is the name of your organizational unit: PBX
C. What is the name of your organization: PBX Pvt Ltd
D. What is the name of your City or Locality: London
E. What is the name of your State or Province: London
F. What is the two-letter country code for this unit: UK
Check what it has created.

$ ls
ca-cert  ca-key  kafka.server1.keystore.jks  kafka.server1.truststore.jks

4. Create the Certificate Signing Request (CSR)

$ keytool -keystore kafka.server1.keystore.jks -certreq -file cert-file -storepass <PassOfKeystore> -keypass <PassOfPrivateKey>

5. Sign the certificate using the CA

$ openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:<PassOfPrivateKey>

6. Import the CA into Keystore

$ keytool -keystore kafka.server1.keystore.jks -alias CARoot -import -file ca-cert -storepass <PassOfKeystore> -keypass <PassOfPrivateKey> -noprompt

7. Import the signed certificate into keystore

$ keytool -keystore kafka.server1.keystore.jks -import -file cert-signed -storepass <PassOfKeystore> -keypass <PassOfPrivateKey> -noprompt

Note :- Now we have a few intermediate files that are no longer needed, so let's remove them.

$ rm -f cert-file cert-signed
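If you are going to set up three brokers (Step 4, Option B), you may prefer to script steps 2 to 7 rather than repeat them by hand for each broker. The sketch below is only a convenience wrapper around the exact commands above: it assumes you run it from the ssl directory, that ca-cert and ca-key already exist there, and that you use a single password everywhere (as recommended). Run it instead of the manual per-broker commands (or start the loop at 2 if you already created broker 1's files interactively). The file name generate-broker-keys.sh and the <YourSinglePassword> placeholder are just examples; adjust them and the -dname values to your environment.

#!/bin/bash
# generate-broker-keys.sh : create truststore, keystore and signed certificate for brokers 1 to 3
PASS="<YourSinglePassword>"   # the single password used for keystores, truststores and private keys
for i in 1 2 3; do
  # 2. Create the Truststore and import the CA certificate into it
  keytool -keystore kafka.server${i}.truststore.jks -alias CARoot -import -file ca-cert -storepass "$PASS" -keypass "$PASS" -noprompt
  # 3. Create the Keystore (the -dname values mirror the interactive answers above)
  keytool -genkey -keystore kafka.server${i}.keystore.jks -validity 365 -storepass "$PASS" -keypass "$PASS" -ext SAN=dns:localhost -storetype pkcs12 -dname "CN=localhost, OU=PBX, O=PBX Pvt Ltd, L=London, ST=London, C=UK"
  # 4. Create the Certificate Signing Request
  keytool -keystore kafka.server${i}.keystore.jks -certreq -file cert-file-${i} -storepass "$PASS" -keypass "$PASS"
  # 5. Sign the certificate using the CA
  openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-${i} -out cert-signed-${i} -days 365 -CAcreateserial -passin pass:"$PASS"
  # 6. Import the CA certificate into the Keystore
  keytool -keystore kafka.server${i}.keystore.jks -alias CARoot -import -file ca-cert -storepass "$PASS" -keypass "$PASS" -noprompt
  # 7. Import the signed certificate into the Keystore
  keytool -keystore kafka.server${i}.keystore.jks -import -file cert-signed-${i} -storepass "$PASS" -keypass "$PASS" -noprompt
  # remove the intermediate files
  rm -f cert-file-${i} cert-signed-${i}
done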

— — — — — — — — — — — — —
“Let's secure the Kafka client”
Wait a minute! What is a Kafka client?
A Kafka client communicates with the Kafka brokers over the network to write (or read) events. Once received, the brokers store the events in a durable and fault-tolerant manner for as long as you need, even forever.
— — — — — — — — — — — — —
2. Create the Truststore

$ keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass <PassForTrust> -keypass <PassOfPrivateKey> -noprompt

3. Create the Keystore

$ keytool -genkey -keystore kafka.client.keystore.jks -validity 365 -storepass <PassForKeystore> -keypass <PassOfPrivateKey> -ext SAN=dns:localhost -storetype pkcs12

After executing the command, it will ask you a few questions:
A. What is your first and last name: localhost
B. What is the name of your organizational unit: PBX
C. What is the name of your organization: PBX Pvt Ltd
D. What is the name of your City or Locality: London
E. What is the name of your State or Province: London
F. What is the two-letter country code for this unit: UK
Check what it has created.

$ ls
You should now also see kafka.client.keystore.jks and kafka.client.truststore.jks alongside ca-cert, ca-key and the broker files created earlier.

4. Create the Certificate Signing Request (CSR)

$ keytool -keystore kafka.client.keystore.jks -certreq -file cert-file -storepass <PassOfKeystore> -keypass <PassOfPrivateKey>

5. Sign the certificate using the CA

$ openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:<PassOfPrivateKey>

6. Import the CA into Keystore

$ keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass <PassOfKeystore> -keypass <PassOfPrivateKey> -noprompt

7. Import the signed certificate into keystore

$ keytool -keystore kafka.client.keystore.jks -import -file cert-signed -storepass <PassOfKeystore> -keypass <PassOfPrivateKey> -noprompt
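Optionally, confirm that the client keystore now contains both the CA certificate and your signed certificate. The exact wording of the output depends on your JDK version, but you should see two entries (the imported CARoot and the private key entry):

$ keytool -list -keystore kafka.client.keystore.jks -storepass <PassOfKeystore>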

Step 4 (Option A) :- If you want to set up only one Kafka broker

First, see which files are present in the "kafka" directory.

Modify server.properties first. The relevant lines are shown below: open your properties file, find them and modify them; if a setting is not there, add it. Or, if you are not sure, you can copy the file from our repository and change it according to your requirements.
Link :- https://github.com/YetAnotherOpportunityofLearning/apachekafkassl

kafka@dev-k8s-master:~/kafka/config$ vi server.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
listeners=PLAINTEXT://0.0.0.0:9191,SSL://0.0.0.0:9091
advertised.listeners=PLAINTEXT://10.168.45.12:9191,SSL://10.168.45.12:9091

ssl.keystore.location=/home/kafka/kafka/ssl/kafka.server1.keystore.jks
ssl.keystore.password=<PassOfKeystore>
ssl.key.password=<PassOfPrivateKey>
ssl.truststore.location=/home/kafka/kafka/ssl/kafka.server1.truststore.jks
ssl.truststore.password=<PassOfTruststore>
security.inter.broker.protocol=SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.protocol=TLSv1.2
# A comma separated list of directories under which to store log files
log.dirs=/home/kafka/kafka/data/kafka1
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

Now you might be confused by this line:
log.dirs=/home/kafka/kafka/data/kafka1
I created additional directories to store the log files (Kafka broker) and the snapshots (ZooKeeper):

log.dirs=/home/kafka/kafka/data/kafka1
dataDir=/home/kafka/kafka/data/zookeeper
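These directories do not exist yet, so create them before starting anything (the paths simply match the values above; adjust them if you used different ones):

$ mkdir -p /home/kafka/kafka/data/kafka1 /home/kafka/kafka/data/zookeeper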

Modify zookeeper.properties:

kafka@dev-k8s-master:~/kafka/config$ vi zookeeper.properties
# the directory where the snapshot is stored.
dataDir=/home/kafka/kafka/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# admin.serverPort=8080
server.1=localhost:2888:3888

Let's start ZooKeeper and the Kafka broker:

$ cd ~/kafka
$ bin/zookeeper-server-start.sh -daemon /home/kafka/kafka/config/zookeeper.properties
$ bin/kafka-server-start.sh -daemon /home/kafka/kafka/config/server.properties

Now let's verify that the broker and ZooKeeper are running fine:

$ jps
Output :-
30755 Jps
17162 Kafka (Means Kafka is running)
31774 QuorumPeerMain (Means Zookeeper is running)

One more way to verify the broker and ZooKeeper:

$ bin/zookeeper-shell.sh localhost:2181
The result will look like this:
Welcome to ZooKeeper!
JLine support is disabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Then, inside the shell, list the registered brokers:
ls /brokers/ids
Output :-
[1] (this is your broker id)
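You can also check that the SSL listener really serves your certificate. Because ssl.client.auth=required, the handshake may be rejected at the very end, but openssl should still print the broker's certificate chain (CN=localhost, issued by your CA) in its output:

$ openssl s_client -connect localhost:9091 -tls1_2 </dev/null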

Step 4 (Option B) :- If you want to set up three Kafka brokers
First, see which files are present in the "kafka" directory.

I recommend removing server.properties and taking the files from our repository instead. Link :- https://github.com/YetAnotherOpportunityofLearning/apachekafkassl
Please verify that everything matches your expectations.

kafka@dev-k8s-master:~/kafka/config$ vi server1.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
listeners=PLAINTEXT://0.0.0.0:9191,SSL://0.0.0.0:9091
advertised.listeners=PLAINTEXT://10.168.45.12:9191,SSL://10.168.45.12:9091

ssl.keystore.location=/home/kafka/kafka/ssl/kafka.server1.keystore.jks
ssl.keystore.password=<PassOfKeystore>
ssl.key.password=<PassOfPrivateKey>
ssl.truststore.location=/home/kafka/kafka/ssl/kafka.server1.truststore.jks
ssl.truststore.password=<PassOfTruststore>
security.inter.broker.protocol=SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.protocol=TLSv1.2
# A comma separated list of directories under which to store log files
log.dirs=/home/kafka/kafka/data/kafka1
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
kafka@dev-k8s-master:~/kafka/config$ vi server2.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
listeners=PLAINTEXT://0.0.0.0:9192,SSL://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.168.45.12:9192,SSL://10.168.45.12:9092

ssl.keystore.location=/home/kafka/kafka/ssl/kafka.server2.keystore.jks
ssl.keystore.password=<PassOfKeystore>
ssl.key.password=<PassOfPrivateKey>
ssl.truststore.location=/home/kafka/kafka/ssl/kafka.server2.truststore.jks
ssl.truststore.password=<PassOfTruststore>
security.inter.broker.protocol=SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.protocol=TLSv1.2
# A comma separated list of directories under which to store log files
log.dirs=/home/kafka/kafka/data/kafka2
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
kafka@dev-k8s-master:~/kafka/config$ vi server3.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3
listeners=PLAINTEXT://0.0.0.0:9193,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://10.168.45.12:9193,SSL://10.168.45.12:9093

ssl.keystore.location=/home/kafka/kafka/ssl/kafka.server3.keystore.jks
ssl.keystore.password=<PassOfKeystore>
ssl.key.password=<PassOfPrivateKey>
ssl.truststore.location=/home/kafka/kafka/ssl/kafka.server3.truststore.jks
ssl.truststore.password=<PassOfTruststore>
security.inter.broker.protocol=SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.protocol=TLSv1.2
# A comma separated list of directories under which to store log files
log.dirs=/home/kafka/kafka/data/kafka3
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

Open all the broker and ZooKeeper property files and verify them.

Now you might be confused by these lines:
log.dirs=/home/kafka/kafka/data/kafka1
I created additional directories to store the log files (one per Kafka broker) and the snapshots (ZooKeeper):

log.dirs=/home/kafka/kafka/data/kafka1
log.dirs=/home/kafka/kafka/data/kafka2
log.dirs=/home/kafka/kafka/data/kafka3

dataDir=/home/kafka/kafka/data/zookeeper
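Again, create these directories before starting anything (the paths match the values above; adjust them if yours differ):

$ mkdir -p /home/kafka/kafka/data/kafka1 /home/kafka/kafka/data/kafka2 /home/kafka/kafka/data/kafka3 /home/kafka/kafka/data/zookeeper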

Let's start ZooKeeper and the Kafka brokers:

$ cd ~/kafka
$ bin/zookeeper-server-start.sh -daemon /home/kafka/kafka/config/zookeeper.properties
$ bin/kafka-server-start.sh -daemon /home/kafka/kafka/config/server1.properties
$ bin/kafka-server-start.sh -daemon /home/kafka/kafka/config/server2.properties
$ bin/kafka-server-start.sh -daemon /home/kafka/kafka/config/server3.properties

Now let's verify that all the brokers are running fine:

$ jps
Output :-
30755 Jps
17162 Kafka (Means Kafka1 is running)
31774 QuorumPeerMain (Means Zookeeper is running)
9954 Kafka (Means Kafka3 is running)
1855 Kafka (Means Kafka2 is running)

One more way to verify the brokers and ZooKeeper:

$ bin/zookeeper-shell.sh localhost:2181
The result will look like this:
Welcome to ZooKeeper!
JLine support is disabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Then, inside the shell, list the registered brokers:
ls /brokers/ids
Output :-
[1, 2, 3] (these are your broker ids)
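If you want to double-check that each broker registered its SSL endpoint, you can also ask ZooKeeper for a single broker's details (shown here for broker 1); the JSON output should list your advertised PLAINTEXT and SSL listeners:

$ bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/1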

Step 5 :- Create a file for the Kafka client

$ cd config
$ touch client.properties
kafka@dev-k8s-master:~/kafka/config$ vi client.properties
bootstrap.servers=localhost:9091,localhost:9092,localhost:9093
security.protocol=SSL
ssl.protocol=TLSv1.2
ssl.keystore.location=/home/kafka/kafka/ssl/kafka.client.keystore.jks
ssl.keystore.password=<PassOfKeystore>
ssl.key.password=<PassOfPrivateKey>
ssl.truststore.location=/home/kafka/kafka/ssl/kafka.client.truststore.jks
ssl.truststore.password=<PassOfTruststore>
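Before moving on, you can test that this client configuration actually completes an SSL handshake with a broker. One simple check (run from the Kafka base directory; adjust the port to whichever SSL listener you configured) is:

$ bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9091 --command-config config/client.properties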

Step 6 :- Create a topic, Producer and Consumer

Create Topic

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic medium

Producer

$ bin/kafka-console-producer.sh --broker-list localhost:9091,localhost:9092,localhost:9093 --topic medium --producer.config config/client.properties

Consumer

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9091,localhost:9092,localhost:9093 --topic medium --consumer.config config/client.properties --from-beginning
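To confirm that the topic was created with the expected partitions and replicas, you can describe it (this uses the same --zookeeper flag as the create command above):

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic medium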

Step 7 :- Connect from Outside (Offset Explorer Kafka Tool)
A. Open Offset Explorer and click on New Connection.
Give the cluster a name (anything) and the ZooKeeper host IP (your server IP).

Point the tool to the client keys/files which you created before,
and add the file paths and passwords correctly.

Provide the bootstrap details along with the correct port that you mentioned in server.properties.

Now try to connect.

Thanks for reading the blog. Please do try it yourself, because "practice makes perfect".

Don't forget to give us a clap and share with others.

Buy Me a Coffee : - https://www.buymeacoffee.com/YAOL

Previous Blog :- https://medium.com/@Opportunity-of-Learning/installation-of-apache-kafka-on-ubuntu-16-04-18-04-and-20-04-f5edcc94e8a0
