EXPEDIA GROUP TECHNOLOGY — DATA

Kafka: Schema Registry PEM authentication

Streamlining Kafka client security using the power of open-source software

Elliot West
Expedia Group Technology

--


Problem

Security is a principal concern of Expedia Group’s Stream Platform: we aim to secure all of our engineers’ stream resources. Naturally this starts with the Kafka topics that persist their events, but once you scratch the surface you find many other resource types involved in typical Kafka usage that you will probably also want to secure. For example, you probably don’t want another user to commit offsets against your group.id, reuse your client.id or Kafka Streams application.id, or change the schemas that describe the structure of your events.

From a user’s perspective, this can get complicated rather quickly. So we’ve implemented a stream control plane that, among other things, is responsible for translating simple user intents into resource and security specifications and applying those to a number of components in our platform, including Kafka, Apache Ranger, and the Confluent Schema Registry. For example, when a user registers a new stream producer, we use that intent to automatically find capacity on a Kafka cluster, create a topic, generate client certificates, and generate both broker and schema access policies. There are a lot of moving parts in the Kafka ecosystem!

Simplified stream producer registration workflow
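
To make one of those moving parts concrete, below is a minimal sketch of the kind of topic-creation step the control plane performs, using Kafka’s standard AdminClient. The bootstrap address, topic name, and sizing are illustrative placeholders rather than our production values.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicProvisioner {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap address; in reality the control plane
        // selects a cluster with available capacity.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9093");

        try (AdminClient admin = AdminClient.create(props)) {
            // One "moving part": create the topic that will persist the new stream's events.
            NewTopic topic = new NewTopic("bookings-events", 12, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```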

Solution

Recently we rolled out a new security feature on our Schema Registry instances so that users can safely write directly to the registry. To align with our broad use of Apache Ranger, we chose to eschew Confluent’s RBAC offering and instead implemented our own Ranger service definition and plugin. While the documentation for performing such an integration was sparse, Ranger is an open-source project, so we were able to get the help we needed via the project’s mailing lists and by reading the source code.
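
At its core, a Ranger plugin integration amounts to embedding a RangerBasePlugin in the service and asking it to authorise each request against the policies it syncs from the Ranger admin. The sketch below shows the general shape, assuming a hypothetical schema-registry service definition with a subject resource and a write access type; our actual service definition differs in its details.

```java
import org.apache.ranger.plugin.policyengine.RangerAccessRequestImpl;
import org.apache.ranger.plugin.policyengine.RangerAccessResourceImpl;
import org.apache.ranger.plugin.policyengine.RangerAccessResult;
import org.apache.ranger.plugin.service.RangerBasePlugin;

public class SchemaRegistryAuthorizer {

    // "schema-registry" is a hypothetical service type; it must match the
    // service definition registered with the Ranger admin.
    private final RangerBasePlugin plugin =
            new RangerBasePlugin("schema-registry", "schema-registry");

    public SchemaRegistryAuthorizer() {
        // Downloads policies from the Ranger admin and keeps them cached locally.
        plugin.init();
    }

    public boolean canWrite(String user, String subject) {
        // Describe the resource being accessed; "subject" is the resource key
        // our hypothetical service definition declares.
        RangerAccessResourceImpl resource = new RangerAccessResourceImpl();
        resource.setValue("subject", subject);

        RangerAccessRequestImpl request = new RangerAccessRequestImpl();
        request.setResource(resource);
        request.setAccessType("write");
        request.setUser(user);

        // Evaluate the request against the cached policies.
        RangerAccessResult result = plugin.isAccessAllowed(request);
        return result != null && result.getIsAllowed();
    }
}
```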

For authentication, we’d previously selected mTLS with PEM-encoded certificates for client-to-broker communications and wanted to use the same scheme for client-to-registry communications, so that a client could use a single set of credentials to access both systems. We had earlier used Java KeyStore (JKS) certificates, but these are clumsy to use and poorly supported outside the JVM. They also require the client to have a file system on which to persist the key store, which is one more thing for our users to manage. PEM instead encodes the certificates as simple strings that are passed directly into your producer and consumer configuration maps. This works really well with our stream discovery service, a system that allows Kafka clients to resolve their configurations via a REST API (see picture below).

User client discovery interactions with the stream platform
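
To illustrate, this is roughly what PEM-based mTLS looks like for a standard Kafka client (supported in Apache Kafka since KIP-651): the certificate material arrives as plain strings, with no key store file in sight. The helper below is a sketch; the PEM arguments would be the strings returned by our discovery service.

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class PemClientConfig {

    static Properties brokerSslProperties(String chainPem, String keyPem, String caPem) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Client certificate chain and private key supplied inline as PEM strings,
        // rather than via a key store file on disk.
        props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
        props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, chainPem);
        props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, keyPem);
        // Trusted CA certificates, also inline.
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
        props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, caPem);
        return props;
    }
}
```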

Unfortunately, it turned out that while mTLS+PEM authentication is now fully supported by Apache Kafka, such support had not been added to the Confluent Schema Registry, which was stuck on mTLS+JKS. This created a messy situation: our Kafka clients would have the inconvenience of juggling both JKS and PEM to interoperate with our streaming platform; JKS for the Schema Registry (all streams in our platform must have a schema) and PEM for the broker connections.

We set out to remedy this by adding PEM support to the open-source Confluent Schema Registry project. We’ve implemented a simple patch that enables registry clients to use either JKS or PEM, giving parity with the authentication schemes offered by the Kafka brokers. It works by providing a separate set of configuration options that accept PEM material and feeding these into the existing certificate-handling code that was previously used exclusively for JKS. I’m happy to report that our pull request has been merged, and we’re looking forward to it hitting a GA release so that we can stop shipping custom-built SerDes to our users.
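
Once the patch lands in a release, configuring a registry-aware serializer should look much like the broker case, with the PEM strings supplied under the client’s schema.registry.ssl. prefix. The property names below follow Kafka’s SSL naming convention and are a sketch of the intended usage, not the definitive set; consult the released documentation once it ships.

```java
import java.util.HashMap;
import java.util.Map;
import io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig;

public class RegistryPemConfig {

    static Map<String, Object> registrySslConfig(String chainPem, String keyPem, String caPem) {
        Map<String, Object> config = new HashMap<>();
        // Illustrative registry URL.
        config.put(AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                "https://schema-registry.example.com");
        // The same PEM material used for the broker connection, passed under
        // the serializer's schema.registry.ssl.* prefix (names assumed here).
        config.put("schema.registry.ssl.keystore.type", "PEM");
        config.put("schema.registry.ssl.keystore.certificate.chain", chainPem);
        config.put("schema.registry.ssl.keystore.key", keyPem);
        config.put("schema.registry.ssl.truststore.type", "PEM");
        config.put("schema.registry.ssl.truststore.certificates", caPem);
        return config;
    }
}
```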

Thanks to the open-source nature of two key projects, we’re now in a good place where users can build zero-configuration Kafka clients. They simply tell our platform’s discovery API who they are with an authorised REST request over HTTPS, and the platform determines which topics the client should access and returns a full set of client configurations, certificates included. The user never has to concern themselves with producer/consumer configuration maps or certificates.
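
Put together, a zero-configuration client ends up looking something like the sketch below. The discovery URL is a hypothetical stand-in for our internal API, and parseJson is a placeholder for whatever JSON parser you prefer; the authentication of the HTTPS request itself is omitted for brevity.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class ZeroConfigClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical discovery endpoint: the client identifies itself and
        // receives a ready-made producer configuration.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://discovery.example.com/v1/clients/my-app/producer-config"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Flatten the JSON response into Kafka config entries; topics,
        // serializers, and PEM certificates are all resolved server-side.
        Map<String, String> discovered = parseJson(response.body());

        Properties props = new Properties();
        props.putAll(discovered);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... produce events as normal
        }
    }

    // Hypothetical helper standing in for your JSON library of choice.
    private static Map<String, String> parseJson(String body) {
        throw new UnsupportedOperationException("use your preferred JSON parser");
    }
}
```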
