Anatomy of an Event Streaming Platform — Part 2
This post in the series discusses the fundamental concepts and architecture of the event storage layer of an event streaming platform.
In the previous post, I talked about the Event Streaming Platform (ESP) concept, why you should have an ESP in your organization, and the high-level architecture of a typical ESP.
In this post, I will explain the storage layer of an ESP in detail. We’ll discuss its architecture and fundamental concepts, drawing examples from popular ESPs out there.
Responsibilities of the storage layer
The storage layer is responsible for storing the ingested events in a highly scalable and durable manner. Simply put, it ensures that no ingested event is lost due to failures within the ESP.
The storage layer is designed to efficiently ingest millions of events from producers and make them available to consumers within milliseconds. Several foundational concepts, common to many ESPs out there, help achieve that.
Let’s explore them in the coming sections.
The grouping of events
A stream of events is a set of related events in motion; they share a common set of attributes. When the ESP ingests an event stream, the stream is materialized into a collection.
The storage layer puts related events into the same collection. A collection is similar to a folder in a filesystem, and the events are the files in that folder. An example collection could be “payments,” which keeps all events coming from the payment stream.
Collections are always multi-producer and multi-subscriber: a collection can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events.
The term collection is something that I came up with to refer to the event grouping inside the storage layer. In Kafka, a collection is called a topic. In AWS Kinesis Data Streams, a collection is mapped to a Stream, whereas Azure Event Hubs uses the term Event Hub.
Before you do any work with the ESP, you must define a collection to hold your events.
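To make this concrete, here’s a minimal sketch of defining a collection in Kafka using its Java AdminClient. The broker address, topic name, partition count, and replication factor below are all illustrative assumptions, not prescriptions:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePaymentsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed address of a broker in the cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // A topic (our "collection") named "payments" with
            // 3 partitions and a replication factor of 1.
            NewTopic payments = new NewTopic("payments", 3, (short) 1);
            admin.createTopics(List.of(payments)).all().get();
        }
    }
}
```

Kinesis offers an equivalent CreateStream API, and in Event Hubs you create an event hub through the Azure portal or its management APIs.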
From now on, I’ll refer to events stored inside the storage layer as data records, since our discussion is turning into a storage-related one.
Partitions
Before we discuss partitions, let’s take a closer look at what ESPs are made of.
Brokers and clusters
Generally, ESPs are distributed systems made of multiple servers, communicating over a network to achieve a common set of goals. These servers add scalability, computation power, and storage to the ESP. Although I’m not too sure about AWS Kinesis and Azure Event Hubs internals, I believe they also follow the same principles when building their infrastructure.
In Kafka’s terminology, each server is called a broker, and brokers make up a Kafka cluster.
Partitions/shards
If we go back to our original discussion, a collection can be further subdivided into partitions.
While the collection is a logical concept, a partition is the smallest storage unit that holds a subset of records owned by a collection. Each partition is a single log file, and records are written to it in an append-only fashion.
While Kafka and Event Hubs use the same term to refer to partitions, Kinesis uses the term “Shard.” In Kinesis, the data records in a data stream are distributed into shards.
[Figure: an Azure Event Hub divided into multiple partitions]
Further to that, Azure Event Hubs documentation says:
A partition can be thought of as a “commit log”. Partitions hold event data that contains body of the event, a user-defined property bag describing the event, metadata such as its offset in the partition, its number in the stream sequence, and service-side timestamp at which it was accepted.
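To make the distribution of records concrete, here’s a simplified sketch of how keyed records can be routed to partitions: hash the record key and take the result modulo the partition count. This is only an illustration; Kafka’s default partitioner uses a murmur2 hash of the key and Kinesis hashes the partition key with MD5, so String.hashCode() below is purely a stand-in:

```java
public class PartitionRouting {
    // Simplified routing: hash the record key and take it modulo the
    // partition count. String.hashCode() stands in for the real hash
    // function (murmur2 in Kafka, MD5 in Kinesis).
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is a valid partition index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Records sharing a key always land in the same partition.
        System.out.println(partitionFor("customer-42", 3)); // some index in [0, 3)
        System.out.println(partitionFor("customer-42", 3)); // same index as above
        System.out.println(partitionFor("customer-7", 3));  // possibly different
    }
}
```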
Why partitioning?
A collection is spread over several “buckets” located on different brokers. This distributed placement of data enables scalability as it allows client applications to both read and write the data from/to many brokers at the same time.
By doing so, we’ll get the following benefits.
- If we put all partitions of a topic on a single broker, the scalability of that topic is constrained by that broker’s I/O throughput, and the topic can never grow bigger than the biggest machine in the cluster. By spreading partitions across multiple brokers, a single topic can be scaled horizontally to provide performance far beyond a single broker’s ability.
- Multiple consumers can consume a single topic in parallel. Serving all partitions from a single broker limits the number of consumers it can support. Partitions on multiple brokers enable more consumers.
- Multiple instances of the same consumer can connect to partitions on different brokers, allowing very high message processing throughput. Each partition is consumed by exactly one of those instances, ensuring that each record has a clear processing owner (see the sketch after this list).
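As a sketch of that last point, here’s a Kafka consumer that joins a consumer group; the group name “payment-processors” and the broker address are illustrative assumptions. Running several instances of this program with the same group.id makes the brokers split the topic’s partitions among them:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PaymentProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances sharing this group.id divide the partitions among themselves.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "payment-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payments"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Start a second instance, and the group rebalances: the topic’s partitions get split between the two instances, so no record is processed by both.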
Offsets
Each record in a partition is assigned a sequential identifier called the offset. The offset is unique for each record within the partition.
The offset is an incremental and immutable number maintained by the storage layer. When a record is written to a partition, it is appended to the end of the log and assigned the next sequential offset. Offsets are particularly useful for consumers when reading records from a partition; we’ll come to that in a later section.
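To make offsets concrete, here’s a sketch of reading a Kafka partition starting from a specific offset. Attaching to partition 0 of the “payments” topic and seeking to offset 42 are purely illustrative choices:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadFromOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Attach directly to one partition and start reading
            // from a specific offset (both values are illustrative).
            TopicPartition partition = new TopicPartition("payments", 0);
            consumer.assign(List.of(partition));
            consumer.seek(partition, 42L);

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```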
Data retention policy
Unlike traditional message-oriented middleware systems, an ESP doesn’t delete data records after consumers consume them. Instead, it allows you to keep them for as long as you want.
However, ESPs offer a time-based expiry for records stored in collections. With retention period properties in place, records have a TTL (time to live). Upon expiry, records are marked for deletion, thereby freeing up disk space.
Usually, you can configure retention policies at the global level or the collection/topic level.
By default, Kafka keeps your records for seven days. Event Hubs Standard tier currently supports a maximum retention period of seven days, and AWS Kinesis uses a default retention period of 24 hours, which can be extended.
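As an example, here’s a sketch of setting a per-topic retention period in Kafka with the AdminClient, using the retention.ms topic configuration. The three-day value, topic name, and broker address are illustrative:

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments");
            // retention.ms = 3 days in milliseconds; records older than
            // this become eligible for deletion.
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", String.valueOf(3L * 24 * 60 * 60 * 1000)),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> change =
                    Map.of(topic, List.of(setRetention));
            admin.incrementalAlterConfigs(change).all().get();
        }
    }
}
```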
Where to next?
This post discussed the foundational concepts of the storage layer. In the next post, let’s discuss how to write events into the storage layer and read them back.
Stay tuned!
References
https://kafka.apache.org/intro
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features
https://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-streams.html