Introduction to Apache Kafka With Spring

Otavio Santana
Published in xgeeks
Oct 27, 2021

Apache Kafka is a community-distributed streaming platform that has three key capabilities: publish and subscribe to streams of records, store streams of records in a fault-tolerant, durable way, and process streams as they occur. Apache Kafka has many success stories in the Java world. This post will cover how to benefit from this powerful tool in the Spring universe.

Apache Kafka Core Concepts

Kafka runs as a cluster on one or more servers that can span multiple data centers. The Kafka cluster stores streams of records in categories called topics, and each record consists of a key, a value, and a timestamp.

From the documentation, Kafka has four core APIs:

  • The Producer API allows an application to publish a stream of records to one or more Kafka topics.
  • The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
  • The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
  • The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems.

Using Docker

There is also the possibility of using Docker. Since it requires two images, one for Zookeeper and one for Apache Kafka, this tutorial will use docker-compose. Follow these instructions:
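A minimal docker-compose.yml sketch for this setup; the Confluent community images, the port mappings, and the `kafka` hostname are assumptions made for this example, not mandated by the article:

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Clients connect via the "kafka" hostname, mapped to localhost later
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```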

Then, run the command:
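With the compose file in place, this starts both containers in the background (assuming the file is saved as docker-compose.yml in the current directory):

```shell
docker-compose up -d
```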

To connect as localhost, also map the Kafka hostname to localhost within Linux by appending the value below to /etc/hosts:
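A sketch of the entry, assuming the broker is advertised under the `kafka` hostname used by the compose service:

```
127.0.0.1       kafka
```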

Application With Spring

To explore Kafka, we'll use the Spring Kafka project. In this project, we'll build a simple name counter: based on a request, it will fire an event that updates a simple in-memory counter.

The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a “template” as a high-level abstraction for sending messages. It also provides support for message-driven POJOs with @KafkaListener annotations and a “listener container”. These libraries promote the use of dependency injection and declarative programming. In all these cases, you will see similarities to the JMS support.

The first step is a Maven-based Spring project, where we'll add the spring-kafka and spring-boot-starter-web dependencies.
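A sketch of the relevant pom.xml fragment, assuming the Spring Boot parent manages the versions:

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```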

Spring-kafka by default uses String for both the serializer and the deserializer. We'll override this configuration to use JSON, so we can send Java objects serialized as JSON.
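One way to override it is through Spring Boot's application.properties; the JsonSerializer and JsonDeserializer classes ship with spring-kafka, while the trusted-packages wildcard is a convenience assumed for this example:

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.trusted.packages=*
```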

The first class is a configuration that creates the topic if it does not exist. Spring provides a TopicBuilder to define the name, partitions, and replicas.
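A minimal sketch of such a configuration class; the topic name "names" and the single partition/replica are assumptions for this example:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfiguration {

    @Bean
    public NewTopic namesTopic() {
        // Spring creates this topic at startup if it does not already exist
        return TopicBuilder.name("names")
                .partitions(1)
                .replicas(1)
                .build();
    }
}
```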

KafkaTemplate is a template for executing high-level operations against Apache Kafka. We'll use this class in the name service to fire two events to Kafka: one to increment and another to decrement the counter.
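A sketch of the producer side; the "names" topic, the event keys, and the Name payload class are assumptions for illustration, not part of the original article:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class NameService {

    private final KafkaTemplate<String, Name> template;

    public NameService(KafkaTemplate<String, Name> template) {
        this.template = template;
    }

    public void increment(Name name) {
        // Publishes an event the consumer will count as +1
        template.send("names", "increment", name);
    }

    public void decrement(Name name) {
        // Publishes an event the consumer will count as -1
        template.send("names", "decrement", name);
    }
}
```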

Having covered the producer side with KafkaTemplate, the next step is to define a consumer class. A consumer class listens to Kafka events and executes an operation. In this sample, the NameConsumer listens to events easily with the @KafkaListener annotation.
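A sketch of the consumer side; the topic and group names, the in-memory map, and the Name class with its getValue() accessor are assumptions for this example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class NameConsumer {

    // Simple in-memory counter keyed by name
    private final Map<String, Integer> counter = new ConcurrentHashMap<>();

    @KafkaListener(topics = "names", groupId = "name-counter")
    public void listen(Name name) {
        // Invoked by the listener container for each record on the topic
        counter.merge(name.getValue(), 1, Integer::sum);
    }
}
```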

To conclude, we've seen the potential of Apache Kafka and why this project became so popular among Big Data players. This simple example shows how straightforward it is to integrate Kafka with Spring.

If you enjoy working on large-scale projects with global impact and if you like a real challenge, feel free to reach out to us at xgeeks! We are growing our team and you might be the next one to join this group of talented people 😉

Check out our social media channels if you want to get a sneak peek of life at xgeeks! See you soon!
