Using Kafka Without “Writing” Code

Kafka has good integration with Java Spring, as you can see here. Spring provides KafkaTemplate and annotations to easily produce to and consume from Kafka.

But not everybody uses Spring. Sometimes a team works in a programming language other than Java. This means learning Kafka comes with another learning curve, especially when using a native client library. If the Kafka usage is only basic publish and consume, we can shorten this learning curve by using Kafka without coding.

How do we do that?

Basically, we still write code to publish to and consume from Kafka. But instead of using a native library, we use a REST API. Most developers are familiar with REST APIs, so the learning curve is generally shorter.

Sad news: Kafka does not come with a REST API as a built-in feature. Fortunately, Confluent provides Kafka REST Proxy as an open source project to access Kafka through an API. This means we get a Kafka REST API as a ready-to-use (and reliable) product, with no need to write our own Kafka REST API.

We can even use Confluent Kafka REST Proxy with binary, JSON, or Avro messages. When we use a schema-based format like Avro, Confluent Kafka REST Proxy also integrates well with Confluent Schema Registry.

Kafka REST Proxy with Schema Registry Architecture

Using Kafka REST Proxy, we can have non-Java producers and consumers that talk to the Kafka REST Proxy via REST API to publish and consume. The one that talks to the Kafka broker, and to the Schema Registry, is the REST Proxy.

However, using the REST API has drawbacks compared to a native library: slower performance and fewer error-handling mechanisms. Whether to use the REST API or a native library depends on your needs. If you only need basic produce and consume, and your team has no time to learn, Kafka REST Proxy can be a solution. However, learning native Kafka coding might still be the best option.

Let’s see a small sample of producing a message using Kafka REST Proxy.

To produce data using the API, we must set the Content-Type header on the request. For producing, we will use the v2 endpoints, and the rule is this: the header value has four segments, and the difference is in segment 2. We set segment 2 depending on what kind of data we will publish (binary, json, or avro), as shown in the pattern below.

Content-Type: application/vnd.kafka.{binary|json|avro}.v2+json

Assuming the Kafka REST Proxy runs on localhost:8082, here are sample curl commands to publish messages to Kafka. We define the topic name in the URL. For example, to publish to the topic my-kafka-topic-name we must use a URL like this:

http://localhost:8082/topics/my-kafka-topic-name

Sample CURL : Publish Binary Message

curl --location --request POST 'http://localhost:8082/topics/my-topic-from-api-binary' \
--header 'Content-Type: application/vnd.kafka.binary.v2+json' \
--data-raw '{
  "records": [
    {
      "key": "kafka record key",
      "value": "SnVzdCBzb21lIHZhbHVlIGZvciBrYWZrYQ=="
    }
  ]
}'

To produce binary data, we use the endpoint above, with the Content-Type request header already set. To publish binary, we must encode the binary data as a base64 string. In the example above, the value is the base64 encoding of the string Just some value for kafka.
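
In case you are wondering where that value string comes from: on Linux or macOS, one quick way to base64-encode a string is the base64 command (a small sketch; your shell may differ):

echo -n 'Just some value for kafka' | base64
# prints: SnVzdCBzb21lIHZhbHVlIGZvciBrYWZrYQ==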

We can publish several records at once in the records array.
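
If the publish succeeds, the REST Proxy responds with the partition and offset of each record. The exact values below are illustrative, but the response looks roughly like this:

{
  "offsets": [
    {
      "partition": 0,
      "offset": 0,
      "error_code": null,
      "error": null
    }
  ],
  "key_schema_id": null,
  "value_schema_id": null
}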

Sample CURL : Publish JSON Message

curl --location --request POST 'http://localhost:8082/topics/my-topic-from-api-json' \
--header 'Content-Type: application/vnd.kafka.json.v2+json' \
--data-raw '{
  "records": [
    {
      "key": "this record has key",
      "value": {
        "name": "Steve Rogers",
        "gender": "MALE"
      }
    },
    {
      "value": [
        {
          "name": "Anna",
          "gender": "FEMALE"
        },
        {
          "name": "Olaf",
          "gender": "MALE"
        }
      ]
    },
    {
      "key": "this is another record key",
      "value": [
        109,
        289,
        442
      ]
    }
  ]
}'

Producing JSON is almost the same as producing binary. Just keep in mind that the Content-Type request header must be changed. Look at records in the body: in this sample, we publish several JSON records at once. The value field is the JSON body, so we can pass any valid JSON as a value.

Sample CURL : Publish Avro Message

Sample 1

curl --location --request POST 'http://localhost:8082/topics/my-topic-from-api-avro' \
--header 'Content-Type: application/vnd.kafka.avro.v2+json' \
--data-raw '{
  "value_schema": "{\"type\":\"record\", \"name\":\"MySchema\", \"fields\":[{\"name\":\"field1\", \"type\":\"string\"}]}",
  "records": [
    {
      "value": {
        "field1": "A sample"
      }
    }
  ]
}'

To produce Avro, we must adjust the Content-Type request header to avro. In the first example, look at the request body: there we define the schema and the records. When we send it, we get back a value_schema_id (for example: 402).
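
For illustration, the response to the first request looks roughly like this; the actual value_schema_id depends on what your Schema Registry assigns:

{
  "offsets": [
    {
      "partition": 0,
      "offset": 0,
      "error_code": null,
      "error": null
    }
  ],
  "key_schema_id": null,
  "value_schema_id": 402
}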

Notice that in the second sample below, the request body has the field value_schema_id. Once we already have the value schema ID, we don’t need to embed the Avro schema into the request body; just provide value_schema_id and execute.

Sample 2

curl --location --request POST 'http://localhost:8082/topics/my-topic-from-api' \
--header 'Content-Type: application/vnd.kafka.avro.v2+json' \
--data-raw '{
  "value_schema_id": 402,
  "records": [
    {
      "value": {
        "field1": "this is another value for field1"
      }
    }
  ]
}'
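
A side note: the same schema mechanism also applies to record keys. If your records carry Avro keys, the produce request body also accepts key_schema or key_schema_id, analogous to value_schema and value_schema_id. A minimal sketch, assuming a plain string key schema (the 402 is the value schema ID from the earlier example):

curl --location --request POST 'http://localhost:8082/topics/my-topic-from-api-avro' \
--header 'Content-Type: application/vnd.kafka.avro.v2+json' \
--data-raw '{
  "key_schema": "{\"type\":\"string\"}",
  "value_schema_id": 402,
  "records": [
    {
      "key": "some-avro-key",
      "value": {
        "field1": "a keyed avro record"
      }
    }
  ]
}'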

What about consuming using Kafka REST Proxy? Consuming via the API takes several steps, all through the REST API:

  1. Create a consumer
  2. Subscribe the consumer to a topic
  3. Start consuming

All these steps are done through different endpoints. On create and subscribe, we must set the Content-Type request header as explained before. On consume, we must set the Accept request header instead; the rule is the same as for the Content-Type header (change segment 2).

Sample CURL : Consume Binary Message

The following are three sample requests that must be run in sequence to consume binary messages.

1. Create Consumer
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi' \
--header 'Content-Type: application/vnd.kafka.binary.v2+json' \
--data-raw '{
  "name": "avroConsumerFromApi-binary",
  "format": "binary",
  "auto.offset.reset": "earliest",
  "auto.commit.enable": "true"
}'
2. Subscribe to topic
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-binary/subscription' \
--header 'Content-Type: application/vnd.kafka.binary.v2+json' \
--data-raw '{
  "topics": [
    "my-topic-from-api-binary"
  ]
}'
3. Consume from topic
curl --location --request GET 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-binary/records' \
--header 'Accept: application/vnd.kafka.binary.v2+json'
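
For reference, step 1 responds with the consumer instance name and its base URI, which is the base for the subscribe and consume calls. Illustratively (your host and port may differ):

{
  "instance_id": "avroConsumerFromApi-binary",
  "base_uri": "http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-binary"
}

Step 3 returns the records; for the binary format, key and value come back base64-encoded. Roughly, for the message we published earlier:

[
  {
    "topic": "my-topic-from-api-binary",
    "key": "a2Fma2EgcmVjb3JkIGtleQ==",
    "value": "SnVzdCBzb21lIHZhbHVlIGZvciBrYWZrYQ==",
    "partition": 0,
    "offset": 0
  }
]

One caveat from practice: the very first GET may return an empty array while the consumer is still joining the group; calling it again usually returns the data.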

Sample CURL : Consume JSON Message

The following are three sample requests that must be run in sequence to consume JSON messages.

1. Create Consumer
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi' \
--header 'Content-Type: application/vnd.kafka.json.v2+json' \
--data-raw '{
  "name": "avroConsumerFromApi-json",
  "format": "json",
  "auto.offset.reset": "earliest",
  "auto.commit.enable": "true"
}'
2. Subscribe to topic
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-json/subscription' \
--header 'Content-Type: application/vnd.kafka.json.v2+json' \
--data-raw '{
  "topics": [
    "my-topic-from-api-json"
  ]
}'
3. Consume from topic
curl --location --request GET 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-json/records' \
--header 'Accept: application/vnd.kafka.json.v2+json'
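
For the json format, the consumed key and value come back as plain JSON rather than base64. Illustratively, the first record published earlier would come back roughly like this:

[
  {
    "topic": "my-topic-from-api-json",
    "key": "this record has key",
    "value": {
      "name": "Steve Rogers",
      "gender": "MALE"
    },
    "partition": 0,
    "offset": 0
  }
]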

Sample CURL : Consume Avro Message

The following are three sample requests that must be run in sequence to consume Avro messages.

1. Create Consumer
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi' \
--header 'Content-Type: application/vnd.kafka.avro.v2+json' \
--data-raw '{
  "name": "avroConsumerFromApi-avro",
  "format": "avro",
  "auto.offset.reset": "earliest",
  "auto.commit.enable": "true"
}'
2. Subscribe to topic
curl --location --request POST 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-avro/subscription' \
--header 'Content-Type: application/vnd.kafka.avro.v2+json' \
--data-raw '{
  "topics": [
    "my-topic-from-api-avro"
  ]
}'
3. Consume from topic
curl --location --request GET 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-avro/records' \
--header 'Accept: application/vnd.kafka.avro.v2+json'
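
One housekeeping step worth adding: when you are done, delete the consumer instance so it does not linger on the proxy until it times out. The endpoint is a DELETE on the instance URI:

curl --location --request DELETE 'http://localhost:8082/consumers/myConsumerGroupFromApi/instances/avroConsumerFromApi-avro' \
--header 'Content-Type: application/vnd.kafka.v2+json'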

So, without learning the native client code (although it’s certainly worth a lot), you or your teammates can work with Kafka.

For more information on Kafka, including Avro and Kafka REST Proxy, see this video.

Cheers!
