How Razorpay uses Druid for seamless analytics and product insights

Authors: Birendra Kumar Sahu & Tanmay Movva (Data Engineering Team)


At Razorpay, billions of data points arrive daily in the form of events and entities. To truly operate with data at our core, enabling data democratization has become essential. This means we need systems in place that can capture, enrich and disseminate data, in the right fashion, to the right stakeholders, at high speed.

However, we had some unavoidable challenges:

Performance — Our raw data resides in S3, and external tables are created on top of this data using Presto on Hive. Analysts schedule Presto queries to generate aggregate tables and use them to improve query performance for ad-hoc needs, report generation, dashboards, etc. Even though these aggregate tables improve the experience compared to querying raw data, average query response times were still high, and that had a direct impact on analyst productivity. So the system we adopt has to help us achieve interactive analytics and also reduce the overhead of our users scheduling Presto queries.

Druid at Razorpay


Druid supports both batch and streaming ingestion from various input sources. In our case S3 and Kafka are input sources for batch and streaming ingestion respectively.
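As a rough sketch of what a streaming ingestion setup looks like, the snippet below builds a Kafka supervisor spec as a Python dict before serialising it to the JSON that Druid expects. The field layout follows Druid's Kafka indexing service spec; the datasource name, topic, and column names are hypothetical, not Razorpay's actual schema:

```python
import json

# Hypothetical Kafka supervisor spec for streaming ingestion.
# The structure mirrors Druid's Kafka indexing service spec;
# "payments", "created_at", etc. are made-up example names.
kafka_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "payments",
            "timestampSpec": {"column": "created_at", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["merchant_id", "method", "status"]},
            "metricsSpec": [
                {"type": "count", "name": "count"},
                {"type": "doubleSum", "name": "amount_sum", "fieldName": "amount"},
            ],
            "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "HOUR",
                "queryGranularity": "MINUTE",
                "rollup": True,
            },
        },
        "ioConfig": {
            "topic": "payments-events",
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
            "taskCount": 1,
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# Serialise to the JSON body that would be submitted to Druid.
spec_json = json.dumps(kafka_spec)
print(spec_json[:60])
```

A batch (S3) spec has the same dataSchema shape, with the ioConfig pointing at S3 prefixes instead of a Kafka topic.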


Druid supports two query languages — Druid SQL and native JSON. Druid SQL is a built-in SQL layer that uses Apache Calcite to translate SQL queries into native JSON queries. Our internal microservices submit queries to the Druid router over HTTPS using the native query language. Our BI tool, Looker, uses the Avatica JDBC driver to make Druid SQL queries to the router process.
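To make the two paths concrete, here is a minimal sketch of equivalent Druid SQL and native JSON query payloads. The `/druid/v2/sql/` and `/druid/v2/` endpoints are Druid's standard query API paths; the router address, datasource, and interval are hypothetical:

```python
import json

ROUTER = "https://druid-router.internal:8888"  # hypothetical router address

# Druid SQL query, POSTed to {ROUTER}/druid/v2/sql/
sql_query = {
    "query": "SELECT status, COUNT(*) AS cnt FROM payments "
             "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY "
             "GROUP BY status",
    "resultFormat": "object",
}

# Equivalent native JSON groupBy query, POSTed to {ROUTER}/druid/v2/
native_query = {
    "queryType": "groupBy",
    "dataSource": "payments",
    "granularity": "all",
    "intervals": ["2021-01-01T00:00:00Z/2021-01-02T00:00:00Z"],
    "dimensions": ["status"],
    "aggregations": [{"type": "count", "name": "cnt"}],
}

# An HTTP client would then send, for example:
#   POST {ROUTER}/druid/v2/sql/  with body json.dumps(sql_query)
#   POST {ROUTER}/druid/v2/      with body json.dumps(native_query)
print(json.dumps(native_query, indent=2))
```

Calcite performs essentially this SQL-to-native translation inside the broker when the SQL endpoint is used.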

Cluster Tuning

The basic cluster tuning guide in the official documentation was very helpful for tuning the Druid cluster. While tuning Druid for performance at scale, we need to be mindful of the types of queries being made to the cluster. Managing merge buffers and the number of processing threads in the Historical and MiddleManager processes is key to achieving the desired query response times and multi-tenancy. The number of group-by queries running at any given time is limited by the number of merge buffers available in the process to which the query is made. The number of processing threads in a process (Historical or MiddleManager) limits the total number of segments that can be processed in parallel. To achieve multi-tenancy, it is important to have an idea of how many segments need to be processed in parallel: too few processing threads leads to higher response times, while over-provisioning them means throwing more cores into the cluster, which leads to unnecessary costs.
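The thread/buffer trade-off above can be sketched with the rule-of-thumb defaults from Druid's basic cluster tuning guide, where processing threads default to cores minus one and merge buffers to max(2, threads / 4). The core counts here are illustrative, not our cluster's actual sizing:

```python
# Rule-of-thumb sizing from Druid's basic cluster tuning guide:
# numThreads defaults to (cores - 1), numMergeBuffers to max(2, threads // 4).
def processing_config(cores: int) -> dict:
    threads = max(1, cores - 1)
    merge_buffers = max(2, threads // 4)
    return {
        "druid.processing.numThreads": threads,
        "druid.processing.numMergeBuffers": merge_buffers,
    }

for cores in (8, 16, 32):
    cfg = processing_config(cores)
    # Each concurrent groupBy query holds a merge buffer, so
    # numMergeBuffers caps groupBy concurrency on this process,
    # while numThreads caps how many segments are scanned in parallel.
    print(cores, cfg)
```

For a 16-core Historical this yields 15 processing threads and 3 merge buffers, i.e. at most 3 concurrent group-by queries on that process before queries start queueing.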

Challenges of using Druid

  • Low-latency updates are a problem, since Druid supports updates only via batch jobs, a consequence of the custom storage format Druid uses to achieve sub-second query response times.
  • Druid doesn’t facilitate joins and has minimal support for complex data types, i.e. arrays and records. Hence, a lot of effort needs to go into modelling the data into a flattened structure.
  • Since data is pre-aggregated on ingestion, granularity below the configured level cannot be achieved (e.g. if granularity is hourly, we lose minute-level detail and can only access its aggregates). Deciding on the right balance between the required roll-up ratio and granularity can become challenging.
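The granularity trade-off in the last point can be illustrated with a toy roll-up, mimicking what Druid does at ingestion with an hourly query granularity. The event data here is made up:

```python
from collections import defaultdict
from datetime import datetime

# Toy events with minute-level timestamps (hypothetical data).
events = [
    ("2021-03-01T10:05", 100.0),
    ("2021-03-01T10:42", 250.0),
    ("2021-03-01T11:07", 75.0),
]

# Hourly roll-up: truncate each timestamp to the hour and pre-aggregate.
rolled = defaultdict(lambda: {"count": 0, "amount_sum": 0.0})
for ts, amount in events:
    hour = datetime.fromisoformat(ts).strftime("%Y-%m-%dT%H:00")
    rolled[hour]["count"] += 1
    rolled[hour]["amount_sum"] += amount

print(dict(rolled))
# Only the hourly aggregates survive; which minute inside the 10:00
# bucket each individual payment arrived in can no longer be recovered.
```

This is why the roll-up ratio and the finest granularity the business needs have to be decided up front.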


Druid is a big data analytics engine designed for scalability and performance. Its well-factored architecture allows easy management and scaling of the Druid cluster, and its optimised storage format enables low-latency analytics queries. We have successfully deployed Druid at Razorpay for our use cases and see continued growth in its footprint. We were able to achieve p90 and p95 query response times of less than 5s and 10s respectively, and our dashboard performance has improved by at least 10x compared to Presto. We continue to work towards increasing adoption of Druid to improve the analytics experience within Razorpay.

Future Enhancement

  • Middle Manager Autoscaling — provision workers based on the number of pending/active indexing tasks. This would help us utilise cluster resources optimally and save a few bucks.
  • Automate ingestion spec generation and enable platform users to develop ingestion specs themselves.


Head of Data Engineering @ Razorpay