Deckard — a Declarative Approach to Making Your Spring Kafka Life Easier

Marcus Janke
Published in idealo Tech Blog · Nov 11, 2020

By Richard Remus & Marcus Janke

Photo by Maxwell Nelson on Unsplash

tl;dr

Did you ever wonder why you’re writing so much configuration to simply send messages to a Kafka topic? Have you produced boilerplate code like the following, over and over again?
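The original snippet here was embedded as an image, so the following is a reconstruction of the kind of Spring Kafka boilerplate meant (MyMessage is a hypothetical payload type), not the exact original:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Configuration
public class MyMessageProducerConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, MyMessage> producerFactory() {
        // Wire up serializers and target brokers by hand, per producer...
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, MyMessage> kafkaTemplate() {
        // ...and repeat all of this for every topic and payload type.
        return new KafkaTemplate<>(producerFactory());
    }
}
```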

Creating a topic is one line on the CLI or a couple of clicks in a UI. Configuring a producer in Spring Kafka takes longer and feels extremely repetitive with each new service that produces messages, even more so when your application produces messages to several different topics. Spring Kafka handles most of the complexity of interacting with Kafka, but the boilerplate it requires can be reduced even further. That's where Deckard comes to the rescue! The above configuration could simply be replaced by the following:
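A declaration along these lines (a sketch: MyMessage is a hypothetical payload type, and the `topic` attribute name is inferred from the feature description below; check the project README for the exact spelling):

```java
@KafkaProducer(topic = "my.topic")
public interface MyMessageProducer extends GenericProducer<String, MyMessage> {
}
```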

Deckard will autogenerate a producer for you, which in turn allows you to send messages as with any KafkaTemplate. That’s it. Cool, right?

Why Not Just Use Spring Cloud Stream and Kafka Binders?

Avid Spring developers might ask this question, and rightly so. Spring Cloud Stream and its Kafka Binder supply a very transparent, high-level Kafka processing framework. Rather than fully fledged stream processing, Deckard aims to help developers easily make their microservices produce Kafka messages without having to buy into a whole streaming framework. Deckard’s concise API also plays into that idea by allowing developers to just declare what the producers have to do, rather than actually implementing them. This concept may remind you of Spring Data Repositories or Spring Cloud Feign Clients, which is no accident.

How Deckard Works

Let’s have a detailed look at how you can use Deckard to let your application produce messages. As of this writing, Deckard’s main features are:

  • declarative definition and automated bootstrapping of Kafka Producers
  • simple and detailed configuration of Serializers per individual producer
  • symmetric payload encryption
  • targeting multiple different Kafka Clusters
  • support for property placeholders via SpEL

Simple Kafka Message Producers

Deckard dependencies are publicly available and can be used with your favourite build automation tool.

In case Maven is your friend, just add the following to your dependencies:
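(The coordinates and version below are illustrative placeholders; take the current ones from the project page.)

```xml
<dependency>
    <groupId>de.idealo.kafka</groupId>
    <artifactId>deckard</artifactId>
    <version>x.y.z</version> <!-- substitute the current release -->
</dependency>
```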

Or, if you like Gradle more, just add:
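(Again with placeholder coordinates; substitute the current release from the project page.)

```groovy
implementation 'de.idealo.kafka:deckard:x.y.z' // substitute the current release
```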

Now, in order to add a Deckard Kafka Producer to your app, you start by extending the GenericProducer interface to sketch out your specific producer. This lets you supply the key and payload types for any messages the producer should publish. The annotation @KafkaProducer lets you supply the target topic you want the producer to publish to. Deckard uses the JsonSerializer for keys and payloads per default, but you can also provide other serializers from Spring Kafka as well as your own.

If you want to use custom serializers, you can provide them as Spring beans and let the Kafka producer pick them up via the annotation parameters keySerializerBean and valueSerializerBean, respectively. And that's basically it. Any other Kafka configuration can be done via the Spring Kafka properties, such as configuring the target Kafka cluster with bootstrap-servers, the consumer group and client id properties, and so on. Deckard will respect all Spring Kafka properties and apply them to every @KafkaProducer you define.
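For illustration, this might look as follows (a sketch: we assume the two annotation parameters take bean names, and MyCustomSerializer is a hypothetical Serializer implementation; verify the details against the project README):

```java
@Configuration
public class SerializerConfig {

    @Bean
    public Serializer<Long> myKeySerializer() {
        return new LongSerializer();
    }

    @Bean
    public Serializer<MyMessage> myValueSerializer() {
        return new MyCustomSerializer();
    }
}

// The producer references the serializer beans by name.
@KafkaProducer(topic = "my.topic",
               keySerializerBean = "myKeySerializer",
               valueSerializerBean = "myValueSerializer")
public interface MyMessageProducer extends GenericProducer<Long, MyMessage> {
}
```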

On startup, Spring will bootstrap the defined producers and supply them as Spring Beans which match the specified interfaces. So to integrate the producer with any of your components, you can just leverage dependency injection:
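For example (class and type names are hypothetical, and the send(key, payload) signature follows the description in the text):

```java
@Service
public class ProductUpdateService {

    private final MyMessageProducer producer;

    // The generated producer bean is injected like any other dependency.
    public ProductUpdateService(MyMessageProducer producer) {
        this.producer = producer;
    }

    public void publish(String key, MyMessage message) {
        producer.send(key, message);
    }
}
```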

The send method will publish the supplied payload along with the given key. Pretty simple, huh?

Targeting Multiple Kafka Clusters

As more and more teams at idealo adopted Kafka for inter-service communication, we became aware that an app sometimes has to send messages to multiple different Kafka clusters. With Deckard, you can just add the target brokers in the @KafkaProducer annotation:
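Roughly like this (the bootstrapServers attribute name is an assumption on our part; check the project README for the exact name):

```java
@KafkaProducer(topic = "my.topic",
               bootstrapServers = {"other-cluster-broker1:9092", "other-cluster-broker2:9092"})
public interface OtherClusterProducer extends GenericProducer<String, MyMessage> {
}
```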

Be aware that the topic has to exist on all selected clusters. This can also be quite helpful when your default Spring Kafka properties already define a Kafka cluster, but you want to send certain messages to another one.

Payload Encryption

Kafka already supports SSL for client-broker and inter-broker communication, along with client authentication. But when you’re dealing with sensitive information, you might want to ensure that the Kafka logs contain only encrypted data, so that your information is also secure on the brokers themselves. An easy way to approach this is payload encryption.

Unsurprisingly, the encryption can be configured via @KafkaProducer:
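A sketch of such a configuration (the encryptionPassword and encryptionSalt attribute names are assumptions; the project README has the exact ones):

```java
@KafkaProducer(topic = "my.topic",
               encryptionPassword = "s3cr3t",
               encryptionSalt = "pepper")
public interface EncryptedProducer extends GenericProducer<String, String> {
}
```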

This will encrypt the output of any configured value serializer using AES with a salt. In order to decrypt such payloads, Deckard also provides a DecryptingDeserializer. Any Kafka consumer receiving the encrypted messages can decrypt them by instantiating this deserializer. The DecryptingDeserializer wraps any kind of Deserializer, so, for example, if the payloads were serialized as String and then encrypted, your deserializer should be built like this:
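Assuming a constructor that takes the password, the salt, and the wrapped deserializer (the exact signature may differ; see the project README):

```java
// Wraps a StringDeserializer: decrypt first, then deserialize the plaintext.
Deserializer<String> valueDeserializer =
        new DecryptingDeserializer<>("s3cr3t", "pepper", new StringDeserializer());
```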

We chose AES, a symmetric cipher, since it’s a little easier to share keys this way than to handle public and private key pairs.
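Deckard handles all of this internally, but to make the scheme concrete, here is a self-contained sketch of password-and-salt-based AES encryption using only the JDK. This is not Deckard's actual implementation (it may differ in key derivation, cipher mode, and encoding); it only illustrates the general idea:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class PayloadCrypto {

    // Derive a 128-bit AES key from password and salt via PBKDF2.
    private static SecretKeySpec deriveKey(String password, String salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        PBEKeySpec spec = new PBEKeySpec(
                password.toCharArray(), salt.getBytes(StandardCharsets.UTF_8), 65_536, 128);
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }

    // Encrypt with AES/CBC, prepend the random IV, and Base64-encode the result.
    public static String encrypt(String plaintext, String password, String salt) {
        try {
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, deriveKey(password, salt), new IvParameterSpec(iv));
            byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Reverse of encrypt(): split off the IV, then decrypt the remainder.
    public static String decrypt(String encoded, String password, String salt) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(in, 0, 16));
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, deriveKey(password, salt), iv);
            byte[] pt = cipher.doFinal(Arrays.copyOfRange(in, 16, in.length));
            return new String(pt, StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String encrypted = encrypt("sensitive payload", "myPassword", "mySalt");
        System.out.println(decrypt(encrypted, "myPassword", "mySalt")); // prints "sensitive payload"
    }
}
```

Because the IV is random, the same plaintext encrypts to different ciphertexts on each call, while any consumer holding the shared password and salt can still decrypt.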

Anyhow, eagle-eyed and security-savvy readers might have noticed that the above example stores the password and salt in plain text within the application code. Since this is a terrible habit, let’s find out how to avoid it in the next section:

Using Spring Property Placeholders

You can use Spring property placeholders in any @KafkaProducer configuration to resolve properties at runtime. This is very useful for dealing with lengthy property values or sensitive configuration, such as passwords. Using this, we can refactor the encryption example:
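A refactored sketch (both the annotation attribute names and the property keys are illustrative assumptions):

```java
@KafkaProducer(topic = "${kafka.topics.orders}",
               encryptionPassword = "${kafka.encryption.password}",
               encryptionSalt = "${kafka.encryption.salt}")
public interface EncryptedProducer extends GenericProducer<String, String> {
}
```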

In other words, Deckard allows us to fetch the password and salt at runtime from a secure location, for example a Spring Cloud Config Server or the container environment, keeping our code free of credentials and letting us sleep (more) soundly at night.

Conclusion

As you can see, Deckard allows you to easily add Kafka Producers to any Spring Boot application, without any need for lengthy boilerplate code.

The configuration of the producers is simple and integrates with conventional Spring Kafka, while also adding some very convenient features. In particular, producing messages to multiple clusters and encrypting messages have never been easier.

If you’re interested, we encourage you to swing by the GitHub project page and try it out.

Thanks for reading!

Oh wait! One more thing:

Contributions

We are full of ideas for various features for Deckard. We also get feature requests now and then. That’s why we would like to invite you to participate. Please share your ideas, make contributions, report bugs and give us feedback. Only together can we make this little project evolve!

Also, if you love agile product development like we do, have a look at our vacancies. ;)
