Google Pub/Sub Lite for Kafka Users

Daniel Collins
Jan 25

Pub/Sub Lite is a new service from the Cloud Pub/Sub team which aims to provide an alternative, cost-focused Pub/Sub system. In particular, it provides a managed service for users who would consider running their own single-zone Apache Kafka cluster for price reasons. This post presents a comparison between Pub/Sub Lite, Pub/Sub and a self-managed Kafka configuration, as well as a walkthrough of how to try out your current Kafka workloads on Pub/Sub Lite.

Pub/Sub Lite shares more high-level concepts with Kafka than Cloud Pub/Sub does, being a partitioned log with progress tracked through advancing offsets. Because of this, it has a more similar API to Kafka, and users can publish and consume messages using the Kafka client APIs.

Notable differences between Pub/Sub Lite and Kafka

Pub/Sub Lite is conceptually similar to Apache Kafka, but it is a different system with APIs more focused on data ingestion. The differences should be immaterial for stream ingestion and processing, but they matter for a number of specific use cases.

Kafka as a database

Pub/Sub Lite does not support transactional publishing or log compaction, which are Kafka features that are generally more useful when you use Kafka as a database than as a messaging system. If this describes your use case, you should consider running your own Kafka cluster or using a managed Kafka solution like Confluent Cloud. If neither of these is an option, you can also consider using a horizontally scalable database like Cloud Spanner, with a table ordered by commit timestamp and row keys used for deduplication.

Compatibility with Kafka Streams

Kafka Streams is a data processing system built on top of Kafka. While it does allow injection of consumer clients, it requires access to all admin operations and uses Kafka's transactional database properties to store internal metadata. Apache Beam is a similar streaming data processing system that integrates with Kafka, Pub/Sub and Pub/Sub Lite, as well as other data sources and sinks. Beam pipelines can also be run in a fully managed way with Dataflow.

Monitoring

Kafka clients can read server-side metrics from the brokers. In Pub/Sub Lite, metrics relevant to publisher and subscriber behavior are exposed through Cloud Monitoring with no additional configuration.

Administration and Configuration

Capacity Management

The capacity of a Kafka topic is generally determined by the capacity of the cluster. Replication, key compaction, and batching settings determine the capacity required to service any given topic on the Kafka cluster, but there are no direct throughput limits on a topic itself. By contrast, both storage and throughput capacity for a Pub/Sub Lite topic must be defined explicitly: topic capacity is determined by the number of partitions and the adjustable read, write, and storage capacity of each partition.

Authentication and Security

Apache Kafka supports several open authentication and encryption mechanisms. With Pub/Sub Lite, authentication is based on GCP’s IAM system. Security is assured through encryption at rest and in transit.

Configuration options

Kafka has a large number of configuration options that control topic structure, limits and broker properties. Some common ones useful for data ingestion are presented below, with their equivalents in Pub/Sub Lite. Note that because Pub/Sub Lite is a managed system, users do not need to be concerned with most broker properties.

auto.create.topics.enable

No equivalent is available. Topics should be created beforehand using the admin API. Similarly, subscriptions (roughly equivalent to consumer groups) must be created with the admin API before being used.

retention.bytes

The equivalent in Pub/Sub Lite is Storage per partition, a per-topic property.

retention.ms

The equivalent in Pub/Sub Lite is Message retention period, a per-topic property.

flush.ms, acks

These are not configurable, but publishes will not be acknowledged until they are guaranteed to be persisted to replicated storage.

max.message.bytes

This is not configurable; 3.5 MiB is the maximum message size that can be sent to Pub/Sub Lite. Message sizes are calculated in a repeatable manner.

key.serializer, value.serializer, key.deserializer, value.deserializer

Pub/Sub Lite's Kafka clients specifically implement Producer&lt;byte[], byte[]&gt; and Consumer&lt;byte[], byte[]&gt;: keys and values are raw byte arrays. Any serialization (which can possibly fail) should be performed by user code.
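As a minimal sketch of what "serialization in user code" means here: instead of configuring a serializer class on the client, you convert your domain values to and from byte arrays yourself before handing them to the byte-array client. The class and method names below are illustrative, not part of any Pub/Sub Lite API.

```java
import java.nio.charset.StandardCharsets;

public class SerializeExample {
    // Pub/Sub Lite's Kafka clients work on raw byte arrays, so any
    // conversion to and from domain types happens in user code, where
    // failures can be caught and handled before publishing.
    static byte[] serialize(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    static String deserialize(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] wire = serialize("order-1234");
        System.out.println(deserialize(wire)); // round-trips to "order-1234"
    }
}
```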

retries

Pub/Sub Lite uses a streaming wire protocol and will retry transient publish errors such as unavailability indefinitely. Any failures which reach end-user code are permanent.

batch.size

Batching settings are configurable at client creation time.

message.timestamp.type

When using the Consumer implementation, the event timestamp will be chosen if present, and the publish timestamp used otherwise. Both publish and event timestamps are available when using Dataflow.

max.partition.fetch.bytes, max.poll.records

Flow control settings are configurable at client creation time.

enable.auto.commit

Autocommit is configurable at client creation time.

enable.idempotence

This is not currently supported.

auto.offset.reset

This is not currently supported.

Getting Started with Pub/Sub Lite

Pub/Sub Lite tooling makes it easy to try out current Kafka workloads running on Pub/Sub Lite. If you have multiple tasks in a consumer group reading from a multi-producer Kafka topic, adapting your code to run with Pub/Sub Lite requires minimal changes. These are outlined below.

Create Pub/Sub Lite resources

To ingest and process data with Pub/Sub Lite, you need to create a topic to publish to and a subscription to read from. You should ensure when creating your topic that it has enough horizontal parallelism (partitions) to handle your peak publish load. If your peak publishing throughput is X MiB/s, you should provision X/4 partitions for your topic with 4 MiB/s of capacity each (the default).
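The rule of thumb above (X MiB/s of peak throughput at the default 4 MiB/s per partition) should round up, so that peak load never exceeds provisioned capacity. A small sketch of that arithmetic:

```java
public class PartitionPlanner {
    // Default per-partition publish capacity for a Pub/Sub Lite topic (MiB/s).
    static final int DEFAULT_PUBLISH_MIBPS = 4;

    // Ceiling division: round up so that peak publish throughput
    // never exceeds the total provisioned partition capacity.
    static int partitionsFor(int peakPublishMibps) {
        return (peakPublishMibps + DEFAULT_PUBLISH_MIBPS - 1) / DEFAULT_PUBLISH_MIBPS;
    }

    public static void main(String[] args) {
        // 10 MiB/s peak needs 3 partitions, not 10/4 = 2.
        System.out.println(partitionsFor(10));
    }
}
```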

Copy data from Kafka

A Kafka Connect connector for Pub/Sub Lite is maintained by the Pub/Sub team, and is the easiest way to copy data to Pub/Sub Lite. For experimentation, the connector repository provides a script that downloads and runs Kafka Connect locally in a single-machine configuration. Ensure that you follow the Pre-Running steps to properly configure authentication before starting. An example properties file would look like this:

name=PubSubLiteSinkConnector
connector.class=com.google.pubsublite.kafka.sink.PubSubLiteSinkConnector
pubsublite.project=my-project
pubsublite.location=europe-south7-q
pubsublite.topic=my-pubsub-lite-topic
topics=my-kafka-topic

This will mirror all data that is published to your Kafka topic to Pub/Sub Lite while it is running. The Kafka Connect documentation provides more information on how to run a Kafka Connect job for your cluster.

Once you start copying data, you should be able to see the Pub/Sub Lite topic’s topic/publish_message_count metric growing in the metrics explorer console, as the backlog of your Kafka topic is copied over.

Read data from Pub/Sub Lite

The Pub/Sub team maintains a Kafka Consumer API implementation that allows you to read data from Pub/Sub Lite with only minimal modifications to your existing code.

To do so, you will replace all instances of KafkaConsumer with a Pub/Sub Lite-specific implementation of the same interface. First, you must ensure that no client code references the concrete KafkaConsumer implementation — instead, you should replace those references with the Consumer interface. Next, you should construct your Pub/Sub Lite Consumer implementation as detailed in the link above, and pass it through to your code.

When you call poll(), you will now be retrieving messages from Pub/Sub Lite instead of Kafka. Note that Pub/Sub Lite will not automatically create a subscription for you: you must create a subscription beforehand using either the UI or gcloud.
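A sketch of what this swap can look like, based on my reading of the pubsublite-kafka and kafka-clients artifacts — the builder names come from those libraries, while the project number, zone, and resource names are placeholders:

```java
import com.google.cloud.pubsublite.CloudZone;
import com.google.cloud.pubsublite.ProjectNumber;
import com.google.cloud.pubsublite.SubscriptionName;
import com.google.cloud.pubsublite.SubscriptionPath;
import com.google.cloud.pubsublite.cloudpubsub.FlowControlSettings;
import com.google.cloud.pubsublite.kafka.ConsumerSettings;
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class LiteConsumerSketch {
  public static void main(String[] args) {
    SubscriptionPath subscription =
        SubscriptionPath.newBuilder()
            .setProject(ProjectNumber.of(123456789L)) // placeholder project number
            .setLocation(CloudZone.parse("us-central1-a"))
            .setName(SubscriptionName.of("my-subscription"))
            .build();

    // Code depends on the Consumer interface, not KafkaConsumer, so the
    // Pub/Sub Lite implementation can be dropped in. Flow control and
    // autocommit here stand in for max.partition.fetch.bytes,
    // max.poll.records and enable.auto.commit in Kafka configuration.
    Consumer<byte[], byte[]> consumer =
        ConsumerSettings.newBuilder()
            .setSubscriptionPath(subscription)
            .setPerPartitionFlowControlSettings(
                FlowControlSettings.builder()
                    .setBytesOutstanding(10 * 1024 * 1024L)
                    .setMessagesOutstanding(1000L)
                    .build())
            .setAutocommit(true)
            .build()
            .instantiate();

    // From here on, this is unchanged Kafka Consumer API usage. The
    // subscribed topic must be the Pub/Sub Lite topic path backing the
    // subscription (placeholder shown here).
    consumer.subscribe(Collections.singletonList("projects/123456789/locations/us-central1-a/topics/my-topic"));
    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<byte[], byte[]> record : records) {
      System.out.println(record.offset());
    }
    consumer.close();
  }
}
```

This requires a live GCP project and pre-created resources, so treat it as a template rather than a ready-to-run program.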

As you receive messages and commit offsets, you can monitor the progress of your subscribers through the backlog by looking at the subscription/backlog_message_count metric in the metrics explorer console.

Write data to Pub/Sub Lite

Once all consumers have been migrated to reading data from Pub/Sub Lite, you can begin migrating your producers to Pub/Sub Lite. Similarly to the consumer case, you can replace all users of KafkaProducer with the Producer interface as a no-op change. Then, following the instructions, you can construct a Pub/Sub Lite Producer implementation and pass it to your code. When you call send(), data will be sent to Pub/Sub Lite instead. As you update your producer jobs, your consumers reading from Pub/Sub Lite will be indifferent to whether the data is sent through Kafka (and copied to Pub/Sub Lite by Kafka Connect) or to Pub/Sub Lite directly. It is not an issue to have Kafka and Pub/Sub Lite producers running at the same time.
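The producer swap can be sketched the same way — again, the builder names reflect my reading of the pubsublite-kafka artifact, and all resource names are placeholders:

```java
import com.google.cloud.pubsublite.CloudZone;
import com.google.cloud.pubsublite.ProjectNumber;
import com.google.cloud.pubsublite.TopicName;
import com.google.cloud.pubsublite.TopicPath;
import com.google.cloud.pubsublite.kafka.ProducerSettings;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LiteProducerSketch {
  public static void main(String[] args) throws Exception {
    TopicPath topic =
        TopicPath.newBuilder()
            .setProject(ProjectNumber.of(123456789L)) // placeholder project number
            .setLocation(CloudZone.parse("us-central1-a"))
            .setName(TopicName.of("my-topic"))
            .build();

    // Code depends on the Producer interface, not KafkaProducer.
    Producer<byte[], byte[]> producer =
        ProducerSettings.newBuilder().setTopicPath(topic).build().instantiate();

    // send() behaves like the Kafka Producer API; only byte[] keys and
    // values are supported, so serialization happens in user code.
    producer
        .send(new ProducerRecord<>(
            topic.toString(),
            "key".getBytes(StandardCharsets.UTF_8),
            "value".getBytes(StandardCharsets.UTF_8)))
        .get(); // blocks until the publish is persisted to replicated storage
    producer.close();
  }
}
```

As with the consumer sketch, this needs a live GCP project and an existing topic to actually run.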

Google Cloud - Community

Google Cloud community articles and blogs
