Connect to Apache Kafka instances on CloudKarafka

Message streaming as a Service

When I was preparing my talk for PyCon TW 2017, I tried different ways to build my own Kafka broker, including the approach in my previous post (Running Apache Kafka on Docker). Then I found CloudKarafka.

CloudKarafka is a streaming platform in the public cloud, designed for Apache Kafka workloads. It offers several animal-themed paid plans for creating your own Kafka instances.

Dedicated Instances

If you just want to try the basic Kafka producer and consumer APIs (like me), you can choose the Developer Duck free plan. I will show how to apply for this free instance and connect to it with the Python client (kafka-python). You can also check the documentation on their website.

Free Shared Plan

1. Login/Register

https://www.cloudkarafka.com/
login/signup

2. Create new instance

new instance
instance name and plan

Only one AWS region (US-East-1) is currently available in the free plan.

3. Details of the instance

CloudKarafka provides a dashboard for broker monitoring and management (more screenshots at the bottom of this article). However, in the free plan, only a few features are accessible.

Details

4. Connect the broker by python client

The official documentation shows how to connect to a CloudKarafka broker with kafka-python (https://www.cloudkarafka.com/docs-python.html).

However, there is one small part that may confuse users: the certificates.

Where can you get the certificate files needed to connect to the brokers?

The sample code from CloudKarafka uses several environment variables (shown below) to set up the client certificates (for an ssl_context object) and the broker address.

'CLOUDKARAFKA_CA':
the value for the cadata argument, usually an ASCII string of one or more PEM-encoded certificates, or a bytes-like object of DER-encoded certificates.
'CLOUDKARAFKA_CERT':
certfile, the path to a single file in PEM format containing the certificate, as well as any number of CA certificates needed to establish the certificate's authenticity.
'CLOUDKARAFKA_PRIVATE_KEY':
keyfile, the path to a file containing the private key.
'CLOUDKARAFKA_BROKERS': the broker address
'CLOUDKARAFKA_TOPIC_PREFIX':
the prefix for topic names # this is necessary unless you create new topics
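As a sketch of what the environment-variable approach amounts to, the snippet below collects those variables and builds the ssl_context described above. The function names here are my own for illustration, not from the official sample:

```python
import os
import ssl

REQUIRED_VARS = [
    "CLOUDKARAFKA_CA",
    "CLOUDKARAFKA_CERT",
    "CLOUDKARAFKA_PRIVATE_KEY",
    "CLOUDKARAFKA_BROKERS",
    "CLOUDKARAFKA_TOPIC_PREFIX",
]

def load_config():
    """Collect the CloudKarafka settings from the environment,
    failing fast with a clear message if any variable is missing."""
    missing = [v for v in REQUIRED_VARS if v not in os.environ]
    if missing:
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {v: os.environ[v] for v in REQUIRED_VARS}

def build_ssl_context(cfg):
    """Build the ssl_context object: the CA goes in as cadata
    (a PEM string), the cert and private key as file paths."""
    ctx = ssl.create_default_context(cadata=cfg["CLOUDKARAFKA_CA"])
    ctx.load_cert_chain(certfile=cfg["CLOUDKARAFKA_CERT"],
                        keyfile=cfg["CLOUDKARAFKA_PRIVATE_KEY"])
    return ctx
```

A producer would then be created with something like `KafkaProducer(bootstrap_servers=cfg["CLOUDKARAFKA_BROKERS"], security_protocol="SSL", ssl_context=build_ssl_context(cfg))`.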

Accessing configuration through environment variables is a reasonable and proper approach. However, if you are not used to writing configuration into your environment variables, or don't want to do so for some reason, here is another way to set up the configuration and connect to CloudKarafka brokers.

First, you need to download the certificates file (a .env file). If you open it with a text editor, you will see the values for CLOUDKARAFKA_CA, CLOUDKARAFKA_CERT, and CLOUDKARAFKA_PRIVATE_KEY, as well as the broker address. Everything you need is in this file.

I saved that certificate information in three files, then created settings.py to read them whenever I need to connect to the brokers.

CLOUDKARAFKA_CA: "~/.ssh/cloud_kafka/ca_cert.pem"
CLOUDKARAFKA_CERT: "~/.ssh/cloud_kafka/signed_cert.pem"
CLOUDKARAFKA_PRIVATE_KEY: "~/.ssh/cloud_kafka/private_key.pem"
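kafka-python can also take the certificate files directly via its ssl_cafile/ssl_certfile/ssl_keyfile options, so the three paths above can be wired in without any environment variables. A minimal sketch, in which the broker address and topic name are placeholders to replace with your instance's details from the console:

```python
import os

# Paths to the three files extracted from the downloaded .env file;
# adjust them to wherever you store your certificates.
CLOUDKARAFKA_CA = os.path.expanduser("~/.ssh/cloud_kafka/ca_cert.pem")
CLOUDKARAFKA_CERT = os.path.expanduser("~/.ssh/cloud_kafka/signed_cert.pem")
CLOUDKARAFKA_PRIVATE_KEY = os.path.expanduser("~/.ssh/cloud_kafka/private_key.pem")

# Placeholder values -- replace with your instance's details.
BROKERS = "host1.srvs.cloudkafka.com:9093"
TOPIC = "myprefix-default"  # shared plans require your account's topic prefix

def make_producer():
    """Create a producer that authenticates with the certificate files."""
    from kafka import KafkaProducer  # pip install kafka-python
    return KafkaProducer(
        bootstrap_servers=BROKERS,
        security_protocol="SSL",
        ssl_cafile=CLOUDKARAFKA_CA,
        ssl_certfile=CLOUDKARAFKA_CERT,
        ssl_keyfile=CLOUDKARAFKA_PRIVATE_KEY,
    )

if __name__ == "__main__":
    producer = make_producer()
    producer.send(TOPIC, b"hello from kafka-python")
    producer.flush()
```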

Please check my sample client code.
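A matching consumer can be sketched the same way; again, the broker address and certificate paths below are placeholders rather than values from my instance:

```python
import os

def make_consumer(topic):
    """Create a consumer with the same SSL settings as the producer.
    Broker address and certificate paths are placeholders."""
    from kafka import KafkaConsumer  # pip install kafka-python
    return KafkaConsumer(
        topic,
        bootstrap_servers="host1.srvs.cloudkafka.com:9093",
        security_protocol="SSL",
        ssl_cafile=os.path.expanduser("~/.ssh/cloud_kafka/ca_cert.pem"),
        ssl_certfile=os.path.expanduser("~/.ssh/cloud_kafka/signed_cert.pem"),
        ssl_keyfile=os.path.expanduser("~/.ssh/cloud_kafka/private_key.pem"),
        auto_offset_reset="earliest",  # read the topic from the beginning
    )

if __name__ == "__main__":
    for message in make_consumer("myprefix-default"):
        print(message.value)
```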

Ref:

1. Python client for Cloudkarafka
2. kafka-python, maintained by Parse.ly


If you want to test the Apache Kafka producer and consumer APIs yourself, but don't want to (or don't know how to) build your own Kafka broker, CloudKarafka is a good choice. CloudKarafka offers several paid plans for users to create Kafka instances.

Here I introduce how to apply for the free shared cloud Kafka broker and connect to it with the Python client (kafka-python). In particular, I explain how to use the SSL certificates provided by CloudKarafka to connect to the brokers without setting environment variables. Please see this article and the sample code.


Screenshots of the CloudKarafka dashboard

Topics
Nodes information
Server metrics
Logs
Certificate
Management
New management dashboard
Integration with other services
VPC