In our previous blog, we introduced “why” we migrated the Kafka service at Walmart from shared bare-metal machines to the new “self-serving” Kafka deployment powered by OpenStack and OneOps. Today, I would like to introduce what the Kafka ecosystem looks like “under the hood”.
The above picture does not capture all of the real-time pipelines, but aims to highlight the key components and the relationships among them.
- Kafka Brokers: we are currently rolling out Kafka version 0.10.1.0 with the suggested JVM parameters, to take advantage of its better stability and reliability compared to the 0.8 family.
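For context, the “suggested JVM parameters” are the G1-based settings along the lines of those in the Kafka operations documentation; the heap size below is purely illustrative, not our production figure:

```shell
# Illustrative G1 GC settings in the spirit of the Kafka ops docs;
# the 6g heap is an example value only.
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
  -XX:InitiatingHeapOccupancyPercent=35"
```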
- MirrorMaker: used for replicating certain topics from one Kafka cluster to another, typically across data centers. We created an internal patch to support complete topic renaming.
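A stock MirrorMaker instance (before any internal patches) is launched roughly along these lines; the config file names and topic whitelist are illustrative:

```shell
# Illustrative invocation: consumer.properties points at the source
# cluster, producer.properties at the destination cluster.
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist "app-logs.*"
```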
- Kafka REST proxy: an HTTP-based proxy to support non-Java clients. This is very useful to existing systems that can trade a bit of throughput for convenience, e.g. not having to write a Kafka client in their own programming language.
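As a sketch of what talking to such a proxy looks like, the snippet below builds the base64-encoded JSON envelope used by the Confluent REST proxy’s “binary” embedded format; the endpoint URL and topic name in the comments are hypothetical:

```python
import base64
import json

def build_produce_payload(messages):
    """Build the JSON envelope for a produce request in the REST
    proxy's "binary" embedded format: each value is base64-encoded."""
    return json.dumps({
        "records": [
            {"value": base64.b64encode(m).decode("ascii")}
            for m in messages
        ]
    })

payload = build_produce_payload([b"hello", b"world"])
# This payload would then be POSTed to a (hypothetical) endpoint such as
#   http://rest-proxy:8082/topics/app-logs
# with Content-Type: application/vnd.kafka.binary.v1+json
```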
- Kafka Connect: for storing Kafka data in permanent storage, such as HDFS. The benefits include: (1) Hadoop-based applications can now consume Kafka data, and (2) another layer of protection against data loss.
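As one illustration, a Confluent HDFS sink connector is configured with properties along these lines; the topic name and NameNode address are placeholders:

```properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=4
topics=app-logs
hdfs.url=hdfs://namenode:8020
flush.size=1000
```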
Apache Kafka does not come with a UI, so we adopted the open-source Kafka Manager for its neat UI design and well-covered operational features. One of the few places where it fell short is monitoring. To remedy this, in our own fork of Kafka Manager, we added monitoring graphs for popular Kafka metrics to every page. For example, the broker-level page:
By clicking a monitoring graph, we can visualize it across different historical windows (e.g. the last 1/2/4 hours, the last day/week/month, or even the last year). For example:
Consumer lag, which denotes how far the consumer is behind the producer, can also be visualized as a graph. For example:
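Conceptually, the lag plotted for each partition is just the broker’s log-end offset minus the consumer group’s committed offset; a minimal sketch:

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag: how far the consumer's committed offset
    trails the newest offset written by producers.

    Both arguments map partition id -> offset; a partition with no
    committed offset yet is treated as fully lagging from offset 0."""
    return {
        partition: end - committed_offsets.get(partition, 0)
        for partition, end in log_end_offsets.items()
    }

lag = consumer_lag({0: 120, 1: 95}, {0: 100, 1: 95})
total_lag = sum(lag.values())  # the single number usually alerted on
```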
For better performance, we recommend users deploy a Kafka cluster within one data center, but this creates an availability issue if the entire data center goes down.
In fact, many of our Kafka use cases share some commonalities:
- local producers: the data sources are independently located in 2–3 major data centers without duplication. E.g. Walmart.com is powered by many applications running in more than one data center; if we want to collect their logs, the sources are located in 2–3 places, but there are no duplicate logs.
- global consumers: the downstream applications want a full view of the data from all data centers. E.g. Application Performance Monitoring (APM) tools need to collect all logs in order to show metrics and alert on which application in which data center is going wrong.
To satisfy “local producers”, “global consumers” and High Availability (application-level, e.g. APM) across data centers, the deployment plan typically looks like:
In the above picture, each producer produces only to its local Kafka cluster, and the consumer in each data center subscribes to two (or even more) topics simultaneously: one is the “local” topic, and the other is the “mirrored” topic from the other cluster.
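A small sketch of that subscription logic: assuming mirrored topics are renamed with a data-center prefix (the `<dc>.<topic>` convention here is an assumption, the kind of renaming a patch like ours enables), each consumer subscribes to its local topic plus one mirrored topic per remote data center:

```python
def subscription_topics(base_topic, local_dc, all_dcs):
    """Topics a consumer in `local_dc` should subscribe to: the local
    topic first, then the mirrored copy from each remote data center.
    The "<dc>.<topic>" naming scheme is illustrative, not a Kafka
    built-in."""
    local = [f"{local_dc}.{base_topic}"]
    mirrored = [f"{dc}.{base_topic}" for dc in all_dcs if dc != local_dc]
    return local + mirrored

topics = subscription_topics("app-logs", "dc1", ["dc1", "dc2"])
```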
Note that if a data center goes down, its Kafka cluster and MirrorMaker go offline, and the VIP of the application should be flipped to the working data center (if needed). Temporarily, the surviving consumer will see only the local messages, but the application relying on that consumer keeps running.
When the data center comes back, its Kafka cluster resumes work and MirrorMaker starts replicating the data from the point at which the data center went down.
We use jmxtrans as the bridge between JMX and the monitoring backend. We currently support two monitoring backends, Graphite and Ganglia, and users choose one when they deploy Kafka.
- Ganglia: we use unicast mode to minimize network “chatter”, because the network is usually the “thin” pipe between hosts in the cloud.
- Graphite: for users who want to build beautiful dashboards by using Grafana on top of Graphite.
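A jmxtrans server entry for a broker typically looks like the fragment below; the host names, JMX port, and the single MBean queried are illustrative:

```json
{
  "servers": [{
    "host": "kafka-broker-1",
    "port": "9999",
    "queries": [{
      "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
      "attr": ["OneMinuteRate"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings": {"host": "graphite.example.com", "port": 2003}
      }]
    }]
  }]
}
```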
Stream Processing at Walmart
The stream processing domain has been flourishing with many open-source systems. At Walmart, we primarily use Spark Streaming and Storm: Storm provides a lightning-fast stream processing infrastructure with huge scalability, while Spark supports a lambda architecture for both batch/ETL and streaming workloads. Both Spark and Storm consume from Kafka nicely, and their use cases cover a wide range: A/B testing, email targeting, product inventory updates, and monitoring dashboards.
We plan to continue investing in Kafka core and its ecosystem. The plan is to put Kafka into more, and more challenging, production scenarios, as long as the problems can be modeled and solved with a “streaming” approach.
Please find me on Twitter at: @NingZhang6