Managing Billions of Data Points: Evolution of Workflow Management at Groupon

May 28


At Groupon’s scale, more than 800 microservices serve over 50M customers with products, activities, travel, and services. These services generate hundreds of terabytes of data daily. In a quest to improve our customers’ 360-degree experience, our engineering and data science teams leverage this data every day for marketing analytics and predictive model development.

Operating a complex marketplace like ours requires crunching big data so that our business can take advantage of the patterns observed on our platform. And since big data pipelines don’t run by themselves, we utilize Apache Airflow to schedule, monitor, and orchestrate them.
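At its core, a workflow orchestrator models jobs as a directed acyclic graph (DAG) and only starts a task once everything upstream of it has finished. A minimal sketch of that idea in plain Python (not Airflow’s API — the task names `extract`, `transform`, `load`, and `report` are hypothetical):

```python
# A workflow scheduler in miniature: tasks plus dependencies form a DAG,
# and the runner only starts a task once all of its upstream tasks are done.
# Illustrative sketch only -- real orchestrators like Airflow add retries,
# scheduling, and monitoring on top of this ordering.

from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run(dag):
    order = []
    for task in TopologicalSorter(dag).static_order():
        order.append(task)  # in a real system, execute the task here
    return order

print(run(dag))  # upstream tasks always come before downstream ones
```

Cron, by contrast, knows nothing about this graph — it only knows wall-clock times, which is where the problems described below come from.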

Hunt for the best Workflow Management System

Today at Groupon, we rely heavily on Airflow as our primary workflow automation tool for multiple systems. But prior to 2018, we used Cron to schedule our jobs, and job-failure alerts due to upstream unavailability were common.

While the Cron solution was suitable early on, its limitations became more pronounced, and its alerts noisier, as our systems grew. Some prominent shortcomings we noticed were:

  • Ad hoc, manual retriggering of jobs in the event of failure

Speaking of numbers, our Consumer Data Authority Platform (CDAP, the platform that governs everything we know about our consumers) at that time had 100+ jobs and 35+ workflows. At that scale, it was nearly impossible for an on-call engineer to manage the system in the event of a failure, so we started looking for alternative open-source solutions.

Placing our bet on Airflow

As part of the CDAP team, we carried out proofs of concept on some of the best solutions available in the market and evaluated them against the KPIs that mattered for our use case. The comparison chart below lists the KPIs relevant to our team and the product we were developing.

KPI Comparison for Workflow Automation Tools

After performing the exercise, it was evident that Apache Airflow was the right contender for our use case: it is battle-tested at large companies and backed by a rich open-source community.

Airflow Architecture at Groupon

Apache Airflow currently provides five types of executors for production deployment:

  1. Sequential Executor — a single-process executor that runs one task instance at a time.

  2. Local Executor — runs task instances as parallel subprocesses on a single host.

  3. Celery Executor — distributes task instances to a pool of worker machines through a Celery message broker.

  4. Dask Executor — runs task instances on a Dask distributed cluster.

  5. Kubernetes Executor — launches each task instance in its own Kubernetes pod.

Easing the monitoring and management of our Apache Spark jobs was the primary reason for migrating to Airflow. These jobs run on a Hadoop cluster with 550 nodes and 28 PB of storage; running for 3–4 hours on average, they process multiple terabytes of data daily. We proceeded with a Local Executor-based architecture, since the Spark pipelines did most of the heavy lifting. Our architecture is multi-tenant, and all the services in our team share the same deployment.
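Selecting the Local Executor is a one-line change in `airflow.cfg`, plus a metadata database that supports concurrent writes (SQLite, the default, does not). A sketch of the relevant fragment — the parallelism value and database host below are illustrative placeholders, not our production settings:

```ini
[core]
# LocalExecutor runs task instances as parallel subprocesses on one host.
executor = LocalExecutor
# Upper bound on concurrently running task instances (illustrative value).
parallelism = 32
# LocalExecutor needs a real metadata DB such as Postgres, not SQLite.
sql_alchemy_conn = postgresql+psycopg2://airflow:***@metadata-db/airflow
```

Because the Spark cluster does the heavy computation, the Airflow host itself only needs enough capacity to poll and track task state, which makes a single-node Local Executor deployment sufficient.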

Single Node Airflow Architecture

Takeaways from this migration

Airflow helped us optimize the entire CDAP data pipeline without worrying about job dependencies. It played a vital role in breaking our monolithic codebase into a modular one, and it helped us integrate multiple independent in-house applications such as our Data Quality Framework, Email Alerting, SLA, and Daily Health Status jobs. Overall, our prominent achievements in the migration were:

  • Reduction in code duplication and lines of code (LOC).

Additional Credits: Muthukumar Lakshmanan, Saurabh Santhosh, Srinivas A and Soupam Mandal.

Groupon Product and Engineering

All things technology from Groupon staff