Introducing Pinterest Secor

Pinterest Engineering
Apr 29, 2014

Pawel Garbacki | Pinterest engineer, Data

As we build products that will eventually power Promoted Pins, it’s vital to maintain a reliable data infrastructure that never fails. Today we’re open sourcing Secor, a zero-data-loss log persistence service whose initial use case was saving the logs produced by our monetization pipeline.

Secor persists Kafka logs to long-term storage such as Amazon S3. It’s not affected by S3’s weak eventual consistency model, incurs no data loss, scales horizontally, and optionally partitions data based on date.

Big data at Pinterest

Pinterest is a data-driven company, and at any point in time we track thousands of metrics derived from hundreds of log types. We collect petabytes of data and add tens of terabytes per day.

There are hundreds of Hadoop jobs slicing the data across multiple dimensions to produce reports that track our business metrics and generate derived aggregates that feed into our serving infrastructure.

Pinterest logging pipeline

Our data logging center of gravity is a Kafka cluster. Kafka introduces abstractions that simplify log collection, but it can stream data only to local disks and therefore isn’t suitable as a long-term data store.

Logs ultimately live on Amazon S3. While S3 is a highly reliable and scalable storage solution, it is only eventually consistent: there are no guarantees for when uploaded files will become visible to readers. S3 also has non-monotonic properties that may cause files to “disappear” and reappear moments later.

A service you can rely on

Project Secor was born from the need to persist messages logged to Kafka to S3 for long-term storage. Data lost or corrupted at this stage isn’t recoverable, so data integrity is Secor’s paramount design objective.

Mechanisms built into Secor ensure that, as long as Kafka doesn’t drop messages before Secor can extract them (for example, due to an aggressive retention policy), every single message will be persisted to S3.

[Figure: Offset lag tracking health metrics]

No-reads principle

Secor works around the limitations of the eventual consistency model by adhering to one principle: it never reads back anything it has written to S3. Instead, it relies on the Kafka consumer offset management protocol to track what has been uploaded to S3. Kafka stores the underlying metadata in ZooKeeper, while commit points are controlled by Secor and occur at a very low frequency, roughly one update per Kafka partition per hour.

Storing metadata separately from the data introduces the potential complication of keeping two stores in sync. Secor addresses this issue by enforcing that data is updated before metadata and by using deterministic S3 paths. Any inconsistency caused by a successful update of the data followed by a failed commit to the metadata store resolves itself during subsequent state updates.
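
To make this ordering concrete, below is a minimal Java sketch of the write-then-commit discipline under deterministic paths. The LogUploader class, the key layout, and the S3Client and OffsetTracker interfaces are illustrative assumptions for this post, not Secor’s actual code.

```java
// Minimal sketch of the write-then-commit ordering described above.
// All names and the key layout are hypothetical, not Secor's real API.
interface S3Client {
    void put(String key, byte[] data); // upload an object, overwriting any previous version
}

interface OffsetTracker {
    void commit(String topic, int partition, long nextOffset); // persist the consumer offset (e.g., in ZooKeeper)
}

public class LogUploader {
    private final S3Client s3;
    private final OffsetTracker offsets;

    public LogUploader(S3Client s3, OffsetTracker offsets) {
        this.s3 = s3;
        this.offsets = offsets;
    }

    // The S3 key is a pure function of (topic, partition, firstOffset), so
    // retrying after a failed offset commit rewrites the same object rather
    // than creating a duplicate.
    static String s3Key(String topic, int partition, long firstOffset) {
        return String.format("secor/%s/%d/%020d.seq", topic, partition, firstOffset);
    }

    public void uploadBatch(String topic, int partition, long firstOffset,
                            long lastOffset, byte[] data) {
        // 1. Write the data first.
        s3.put(s3Key(topic, partition, firstOffset), data);
        // 2. Only then advance the committed offset. If this step fails, the
        // batch is re-uploaded to the same deterministic key on the next run,
        // and state converges without ever reading anything back from S3.
        offsets.commit(topic, partition, lastOffset + 1);
    }
}
```

Because the key depends only on the topic, partition, and first offset of the batch, a crash between the upload and the offset commit simply causes the same bytes to be rewritten to the same key on the next run.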

The benefits of Secor

In addition to guaranteeing data integrity, Secor comes with a number of functional features:

  • Load Distribution: It can be distributed across multiple machines.
  • Horizontal Scalability: Scaling the system out to handle more load is as easy as starting extra processes, and the resource footprint can be reduced by killing any of the running processes. Neither scaling up nor scaling down affects data consistency.
  • Output Partitioning: It parses incoming messages and puts them under partitioned S3 paths to enable direct import into systems like Hive.
  • Configurable Upload Policies: Commit points controlling when data is persisted to S3 are configured through size-based and time-based policies (e.g., upload when the local buffer reaches 100MB, and at least once per hour); see the sketch after this list.
  • Monitoring: Metrics tracking various performance properties are exposed through Ostrich and optionally exported to OpenTSDB.
  • Customizability: An external log message parser may be loaded by updating the configuration.
  • Qubole Interface: It connects to Qubole to add finalized output partitions to Hive tables.
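
As a sketch of the upload policies mentioned above, the following hypothetical UploadPolicy class combines a size threshold with a time threshold; the class name and thresholds are illustrative assumptions, not Secor’s actual configuration.

```java
// Hypothetical sketch of a combined size- and time-based upload trigger,
// as described in the "Configurable Upload Policies" bullet; the class
// name and thresholds are illustrative, not Secor's real config keys.
public class UploadPolicy {
    private final long maxBytes;     // size threshold, e.g. 100 MB
    private final long maxAgeMillis; // time threshold, e.g. one hour

    public UploadPolicy(long maxBytes, long maxAgeMillis) {
        this.maxBytes = maxBytes;
        this.maxAgeMillis = maxAgeMillis;
    }

    // Flush the local buffer to S3 when it is either large enough or old
    // enough, whichever comes first.
    public boolean shouldUpload(long bufferedBytes, long bufferCreatedAtMillis) {
        long ageMillis = System.currentTimeMillis() - bufferCreatedAtMillis;
        return bufferedBytes >= maxBytes || ageMillis >= maxAgeMillis;
    }

    public static void main(String[] args) {
        // Upload at 100 MB or after one hour, whichever comes first.
        UploadPolicy policy = new UploadPolicy(100L * 1024 * 1024, 60L * 60 * 1000);
        System.out.println(policy.shouldUpload(150L * 1024 * 1024, System.currentTimeMillis()));
    }
}
```

Combining the two triggers bounds both the local buffer’s footprint and the delay before data becomes available on S3.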

Try it out with us!

The architecture of Secor is flexible enough to fit into various environments, and its code has been hardened by production use since the initial rollout in February 2014. You can now access the Secor source code and design details for your own use. If you have any questions or comments, you can reach us at secor-users@googlegroups.com.

If you’re interested in tackling big data challenges like this, join us!

Pawel Garbacki is a software engineer at Pinterest.
