Why You Need To Upgrade Your Apache Kafka Client Immediately

Gilles Philippart
Confluent
Aug 9, 2024

Let me break the news to you: if your streaming application is using Apache Kafka® client version 2.0, you’re missing out on 63 new features, 860 improvements, and 1,525 bug fixes compared with the recently released Apache Kafka 3.8.

For your convenience, I’ve compiled the complete Kafka release history starting from version 2.0 at the end of this post (I might include versions down to 0.8.0 if there’s enough interest). Check it out to see what you might be missing out on, based on the version you’re currently using. Hopefully, this will help convince your boss that an upgrade is overdue!

Every engineer knows the importance of staying current with software updates. Unfortunately, many developers find themselves stuck on outdated versions of Kafka clients, often for no good reason.

If you’ve been paying attention, you’ve probably noticed that the numbers I mentioned earlier cover both broker and client changes. The Kafka client, in particular, is built to handle a bunch of complex tasks like message serialization and deserialization, retries and failovers, offset tracking, batching and compression, and schema enforcement. In fact, when you include the Streams API, the Kafka client actually has more lines of code than the server (thanks Stanislav Kozlovski for the breakdown)! So, it’s no surprise that each new release often comes with important client-side updates.
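To make those client-side responsibilities concrete, here is a sketch of the knobs that control batching, compression, and retries, using the standard Kafka producer config keys. The dict format follows the style of Python Kafka clients such as confluent-kafka; building the dict itself needs no Kafka dependency, and the broker address is an assumption.

```python
# A sketch of the client-side tasks mentioned above, expressed as
# standard Kafka producer config keys (confluent-kafka-style dict).
producer_config = {
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    # Batching: wait up to 10 ms to fill batches of up to 64 KiB.
    "linger.ms": 10,
    "batch.size": 64 * 1024,
    # Compression is applied per batch, on the client.
    "compression.type": "lz4",
    # Retries and failover are handled by the client itself.
    "retries": 5,
    "enable.idempotence": True,  # avoids duplicates when a retry succeeds
}
```

Every one of these behaviors lives in the client library, which is exactly why client upgrades matter as much as broker upgrades.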

With that in mind, let’s dive into the five key reasons to update your applications and services to the latest version of the Kafka client:

Greater support and compatibility

“Hey Boss, remember that client API from 2018? No? Well, neither does Kafka!”

Using outdated Kafka clients can lead to significant limitations. Older versions are often no longer supported, meaning any bugs or issues will not be addressed. While KIP-35 allowed older Kafka clients to communicate with more recent Kafka brokers, these old clients miss out on the latest features and improvements available in recent versions.

Sometimes, outdated APIs are deprecated and then removed after a few releases to facilitate evolution. For instance, client APIs released before Apache Kafka 2.1 have been deprecated since version 3.7, and will be removed in version 4.0. Likewise, ZooKeeper, marked as deprecated since the 3.5 release, will also be removed in Kafka 4.0.

Make sure to update your Kafka clients regularly to avoid the hassle and risks of rushing an upgrade at the last minute.

Bug fixes and security patches

“Remember that annoying bug that caused data loss? Yeah, the new version fixed it… last year.”

Newer versions of Kafka clients include important bug fixes and security patches. Running outdated versions can expose your system to known vulnerabilities and operational issues that have been resolved in later releases.

For instance, this gnarly issue caused consumers to be unable to reconnect to the group coordinator after a commitOffsetsAsync exception. It affected multiple versions starting from 2.6.1, and was eventually fixed in version 3.2.1. (Interestingly though, the release notes for this version don’t mention the bug fix).

As an engineer, staying on top of vulnerabilities is one of your critical responsibilities. The Apache Kafka project maintains a CVE list, which should be a strong incentive to upgrade for anyone serious about security.

By updating your Kafka clients frequently, you protect your system against potential security threats and ensure smoother operation.

Performance improvements

“Sure, we could cut processing time by 30%, but hey, if it’s not broken don’t fix it, right?”

Major or minor Kafka releases often include performance enhancements, either on the broker side, the client side, or both.

Kafka 2.4 introduced the Kafka Consumer Incremental Rebalance Protocol (KIP-429), which offers better rebalance behavior for failed members, and reduces unnecessary downtime due to partition migration.
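Opting into the incremental protocol is a one-line consumer setting. The key and value below follow the librdkafka-style config used by Python clients (the Java client takes the `CooperativeStickyAssignor` class name instead); the broker address and group id are hypothetical.

```python
# Opting a consumer into cooperative (incremental) rebalancing, per
# KIP-429. With this strategy, a rebalance only revokes the partitions
# that actually move, instead of stopping the whole group.
consumer_config = {
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    "group.id": "orders-processor",         # hypothetical group
    "partition.assignment.strategy": "cooperative-sticky",
}
```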

In Kafka 3.3, the strictly uniform sticky partitioner (KIP-794) improved the default partitioner to distribute non-keyed data evenly in batches across healthy brokers, sending less data to slow or unhealthy ones. As a result, the p99 latency for a producer workload with a misbehaving broker was reduced from 11 s to 154 ms!
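The new behavior is on by default in the Java producer from 3.3 onward, but KIP-794 also added two tuning knobs. These are Java producer config names, shown here as a plain dict for illustration; the values are the defaults, not a recommendation.

```python
# The two producer settings introduced by KIP-794 (Java client names,
# shown as a plain dict; values are the documented defaults).
partitioner_settings = {
    # Let the partitioner route more non-keyed data to faster brokers.
    "partitioner.adaptive.partitioning.enable": "true",
    # Exclude a broker from partitioning if it hasn't accepted produce
    # requests within this window; 0 disables the availability check.
    "partitioner.availability.timeout.ms": "0",
}
```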

Another performance improvement was implemented in Kafka 3.7 with leader discovery optimisation for the client (KIP-951), which minimizes the time taken to discover a new leader. This change boosts the efficiency and speed of Kafka workloads during partition leader changes.

As a side note, the next-gen consumer rebalance protocol (KIP-848) is now available for preview in Kafka 3.8. It provides the same guarantees as the current protocol but is more efficient and no longer relies on a global synchronization barrier. While it is not ready for production use yet, you can start testing it in non-prod environments to get prepared for its general availability in the next release.
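Trying the preview is a consumer-side opt-in. The sketch below uses the `group.protocol` consumer setting from KIP-848; note that in the 3.8 preview the broker must also enable the new group coordinator (check the release notes), and the addresses and group id here are assumptions.

```python
# Opting a non-prod consumer into the next-gen rebalance protocol
# (KIP-848 preview). "classic" is the old default protocol.
preview_config = {
    "bootstrap.servers": "localhost:9092",  # assumption: test cluster
    "group.id": "kip848-test",              # hypothetical group
    "group.protocol": "consumer",
}
```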

As you can see, staying up-to-date ensures that your system runs at peak performance.

New features

Kafka updates also regularly introduce new features, either adding functionality or improving the developer experience.

For instance, KIP-714, which enhances client metrics and observability (added in Kafka 3.7), and KIP-618, which brings exactly-once support to Kafka Connect source connectors (added in Kafka 3.3), can save you considerable time and effort when implementing and monitoring data streaming systems.
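To give a feel for the exactly-once feature, here is a sketch of how KIP-618 is switched on: the worker-level and connector-level property names below come from KIP-618 itself, while the chosen values are illustrative, not a recommendation.

```python
# Enabling exactly-once source connectors (KIP-618, Kafka Connect 3.3+).
# Property names are from KIP-618; values are illustrative.
worker_config = {
    # Distributed worker setting; use "preparing" during a rolling upgrade.
    "exactly.once.source.support": "enabled",
}
connector_config = {
    # Fail fast if the worker can't actually provide exactly-once.
    "exactly.once.support": "required",
    # Commit one producer transaction per poll() batch.
    "transaction.boundary": "poll",
}
```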

If you’re a Kafka Streams user, joining and enriching data streams became much easier with the addition of the streaming foreign-key join feature in Kafka 2.4.
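Kafka Streams is a Java DSL, so as a language-neutral illustration, here is what a foreign-key join between two tables computes: each record on the left side is enriched with the right-side record its foreign key points to. The data and function names are hypothetical; in Streams you would express this with `KTable#join` and a foreign-key extractor.

```python
# A plain-Python sketch of foreign-key join semantics: enrich each
# order with the customer record its "customer_id" field points to.
orders = {  # order_id -> order record (hypothetical data)
    "o1": {"customer_id": "c1", "amount": 99},
    "o2": {"customer_id": "c2", "amount": 45},
}
customers = {  # customer_id -> customer record (hypothetical data)
    "c1": {"name": "Alice"},
    "c2": {"name": "Bob"},
}

def foreign_key_join(left, right, fk_extractor, joiner):
    """Join each left record with the right record its foreign key selects."""
    return {
        key: joiner(value, right[fk_extractor(value)])
        for key, value in left.items()
        if fk_extractor(value) in right
    }

enriched = foreign_key_join(
    orders, customers,
    fk_extractor=lambda order: order["customer_id"],
    joiner=lambda order, cust: {**order, "customer_name": cust["name"]},
)
# enriched["o1"] now carries both the amount and the customer name.
```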

And recently, versioned state stores (KIP-889) were introduced in Kafka 3.5 to improve the accuracy of joins when processing out-of-order records.
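A toy model helps show why versioning matters: instead of keeping only the latest value per key, a versioned store keeps a timestamped history and can answer "what was the value as of time t?". The class below mirrors that idea in plain Python; it is not the KIP-889 API, just a sketch of its semantics.

```python
import bisect

class VersionedStore:
    """Toy versioned store: per-key history with as-of lookups (not the KIP-889 API)."""

    def __init__(self):
        self._history = {}  # key -> (sorted timestamps, parallel values)

    def put(self, key, timestamp, value):
        timestamps, values = self._history.setdefault(key, ([], []))
        i = bisect.bisect_left(timestamps, timestamp)
        timestamps.insert(i, timestamp)
        values.insert(i, value)

    def get(self, key, as_of):
        """Return the value that was current at time `as_of`, or None."""
        timestamps, values = self._history.get(key, ([], []))
        i = bisect.bisect_right(timestamps, as_of)
        return values[i - 1] if i else None

store = VersionedStore()
store.put("price:widget", 100, 9.99)
store.put("price:widget", 200, 12.49)
# An out-of-order record timestamped 150 joins against the price that
# was in effect at t=150 (9.99), not the latest value (12.49).
```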

Long-term maintenance

If you’re a Confluent customer, you get support for two to three years (depending on the support level) after the initial release of a minor Kafka version. By updating to newer versions, you ensure that your Kafka clients remain within the maintenance window, receiving necessary updates and support.

Surely, everyone enjoys a bit of excitement, but it’s always best when it doesn’t put your job at risk!

Conclusion

Sticking with outdated software is like driving a car with old, worn-out tires. You may still get to your destination, but not as smoothly or safely as you could with new ones.

If you haven’t upgraded in a long time, you should definitely feel the FOMO: there’s a lot you’re missing out on.

Embrace the benefits of updating your apps to the latest version of the Kafka client now:

  • Greater compatibility and support
  • Critical bug fixes and security patches
  • Improved performance
  • Exciting new features
  • Reliable long-term maintenance

Kafka release history




Engineer turned content creator about data streaming. I work for Confluent.