Comparing the ACID Properties of Databricks Delta Lake and Splice Machine

Splice Machine · Aug 21, 2019

In a previous blog, we discussed how two OLAP systems with ACID properties, Hive LLAP and Snowflake, demonstrated poor throughput when tested on a standard transaction benchmark, TPC-C (or its close cousin, the HTAP benchmark, which combines the operational workload of TPC-C with the analytical workload of TPC-H in a single run).

Buyer Beware: ACID Compliance of Analytical Data Platforms May Not Be What You Expect

We next wanted to examine Databricks Delta Lake, “an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads”. Additionally, it has a number of other interesting capabilities, including Time Travel for enabling point-in-time query capabilities.
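As a quick illustration of that capability, a Delta table can be queried as it existed at an earlier version or point in time. A minimal sketch in PySpark, with the table path, version, and timestamp all hypothetical:

```python
# A minimal Time Travel sketch: read a Delta table as of an earlier commit
# version or wall-clock time (path, version, and timestamp are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-time-travel").getOrCreate()

# Read the table as of a specific commit version ...
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/delta/orders")

# ... or as of a point in time.
snapshot = (spark.read.format("delta")
            .option("timestampAsOf", "2019-08-01 00:00:00")
            .load("/delta/orders"))
```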

In our tests, we evaluated both the Databricks and open-source versions (see https://github.com/delta-io/delta), and within the scope of these tests, the results did not vary significantly. All experiments were run on a single m4.xlarge or equivalent machine (4 vCPU, 16 GB RAM).
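For reference, the open-source build needs nothing more than a Spark session with the delta-core package on its classpath. A minimal smoke-test sketch, with the package version an assumption based on what was current at the time:

```python
# A minimal open-source Delta Lake smoke test. Launch with the delta-core
# package on the classpath (version is an assumption), e.g.:
#   spark-submit --packages io.delta:delta-core_2.11:0.3.0 smoke_test.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-smoke-test").getOrCreate()

# Write a toy Delta table, then read it back.
spark.range(5).write.format("delta").mode("overwrite").save("/tmp/delta-table")
spark.read.format("delta").load("/tmp/delta-table").show()
```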

Setup — Modified TPC-C — NewOrder Only

To perform this testing, we needed to modify our TPC-C simulator (see https://github.com/splicemachine/htap-benchmark), because at the time of testing only inserts were supported by the JDBC driver for Delta Lake (updates and deletes are expected soon). We decided that, for our test purposes, inserts would be good enough for TPC-C testing, since NewOrder is the only event type that "counts" toward TpmC throughput, and the activity within a NewOrder event consists of inserts. This effectively neglects two minor update statements on the STOCK and DISTRICT tables within NewOrder. By ignoring those (and of course by turning off all of the other transaction types that normally run in parallel in the TPC-C benchmark), we are giving Delta Lake the benefit of the doubt on throughput, likely by a factor of approximately 2x, since it does not have to take these steps.
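For reference, the reduced NewOrder transaction boils down to three kinds of inserts. A minimal sketch in PySpark SQL, with hypothetical table names and a simplified column layout standing in for the TPC-C schema (our actual runs drove equivalent inserts through the JDBC driver):

```python
# A minimal sketch of the insert-only NewOrder path, assuming Delta tables
# named orders, new_orders, and order_lines (hypothetical simplified schema).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tpcc-neworder-inserts").getOrCreate()

def new_order(w_id, d_id, o_id, c_id, lines):
    # One row into ORDER and one into NEW-ORDER per order ...
    spark.sql(f"INSERT INTO orders VALUES "
              f"({o_id}, {d_id}, {w_id}, {c_id}, current_timestamp(), {len(lines)})")
    spark.sql(f"INSERT INTO new_orders VALUES ({o_id}, {d_id}, {w_id})")
    # ... and one row into ORDER-LINE per line item. The STOCK and DISTRICT
    # updates required by full TPC-C are skipped, as described above.
    for ol_number, (i_id, qty, amount) in enumerate(lines, start=1):
        spark.sql(f"INSERT INTO order_lines VALUES "
                  f"({o_id}, {d_id}, {w_id}, {ol_number}, {i_id}, {qty}, {amount})")
```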

[Figure: The TPC-C schema]

First Test Scenario — Simple TPC-C

In our initial testing, we performed standard JDBC inserts. Using JDBC inserts into a Delta Lake structure, we found that the TpmC for NewOrder was about 2. This is in the same range as the results we measured for Hive LLAP and Snowflake, which were < 1. In comparison, Splice Machine achieves a TpmC of 600 for NewOrder on a similar hardware setup (and as high as 10,000 or more on modest-sized clusters, as described in separate testing here).

Second Test Scenario — Kafka Streaming

Next, we asked ourselves: could we improve Databricks Delta Lake's transactional performance by streaming the data in instead?

Streaming NewOrder event types is a little tricky because we are really receiving both Order and Order Line data as new orders are added in the TPC-C simulation. So we created a new stream of denormalized data, combining all unique fields of the Order and Order Line records (joined on their primary keys), and modified our NewOrder TPC-C program to insert the data into both tables appropriately (again performing inserts only). We then set up a single-node producer and consumer model to achieve the simplest streaming throughput. In this model, each message carried a single row of the combined data; the consumer conditionally creates the NewOrder if it does not already exist, and always creates each new Order Line, as indicated below:
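A minimal sketch of that consumer logic using Spark Structured Streaming, with the Kafka topic, message schema, and Delta table paths all hypothetical:

```python
# A minimal consumer sketch: read denormalized NewOrder rows from Kafka,
# conditionally create the Order, and always append the Order Line.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType

spark = SparkSession.builder.appName("tpcc-neworder-consumer").getOrCreate()

# Hypothetical subset of the combined Order + Order Line fields.
schema = StructType([
    StructField("o_id", IntegerType()),
    StructField("o_d_id", IntegerType()),
    StructField("o_w_id", IntegerType()),
    StructField("ol_number", IntegerType()),
    StructField("ol_i_id", IntegerType()),
    StructField("ol_amount", DoubleType()),
])

def write_batch(batch_df, batch_id):
    # Conditionally create the Order: keep only keys not already present.
    orders = batch_df.select("o_id", "o_d_id", "o_w_id").dropDuplicates()
    existing = (spark.read.format("delta").load("/delta/orders")
                .select("o_id", "o_d_id", "o_w_id"))
    (orders.join(existing, ["o_id", "o_d_id", "o_w_id"], "left_anti")
           .write.format("delta").mode("append").save("/delta/orders"))
    # Always create each new Order Line.
    batch_df.write.format("delta").mode("append").save("/delta/order_lines")

(spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "new-orders")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("v"))
    .select("v.*")
    .writeStream
    .foreachBatch(write_batch)
    .start()
    .awaitTermination())
```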

Doing so sped up the streaming production of the denormalized data significantly. But on the consumer side, it still took a long time to perform the Order and Order Line record insertions. As a result, we were still left with a TpmC of only 5 (280 NewOrders in 60 minutes, or roughly 4.7 per minute). Note that this produced around 2,800 Order Lines in the process, but that is the nature of the TPC-C benchmark: TpmC is determined by the number of NewOrders transacted per minute, even though many other operations occur as part of that process.

Clearly, these results indicate that record creation on the consumer side is the bottleneck in this process. We could add more consumer nodes to get more aggregate throughput, but a TpmC of only 5 per consumer seems very low.

Third Test Scenario — Batch Writing

A final test we wanted to run was one we expected Delta Lake to be strong at: batching the NewOrder inserts. In this scenario, we hold on to the Orders and Order Lines as they are received, only inserting them once a batch threshold is reached. This takes us further and further from the true TPC-C simulation, where each Order (and Order Line) is transactionally added as it occurs in real time (as we did in the previous scenarios). But since Delta Lake is designed for batch workloads, we wanted to test and publish these batching results for completeness.
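A minimal sketch of this batching approach, reusing the hypothetical denormalized rows from the streaming scenario; the 16K threshold matches the batch size reported below:

```python
# A minimal batching sketch: buffer incoming NewOrder rows and append them to
# Delta in one write per threshold. Paths are hypothetical, and rows are
# assumed to be pyspark.sql.Row instances so the schema can be inferred.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tpcc-batch-writer").getOrCreate()

BATCH_SIZE = 16_000
buffer = []

def on_new_order_row(row):
    # Hold on to rows as they arrive ...
    buffer.append(row)
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    # ... and insert the whole batch at once when the threshold is reached.
    if buffer:
        (spark.createDataFrame(buffer)
              .write.format("delta").mode("append").save("/delta/order_lines"))
        buffer.clear()
```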

When run this way, the results showed a definite performance uptick: with a batch size of 16K, 210K orders were inserted in 36 minutes, for an effective TpmC of 5,833 in this mode.

Summarizing

In summary, we tested Databricks Delta Lake, an impressive system with numerous powerful features, including ACID transactions. But the "ACID" label alone doesn't mean it can keep up on the well-established transactional industry benchmarks that have been around for some time now.

The TPC-C benchmark is intended to simulate a true order-processing flow of transactions from hundreds or even thousands of order takers, and that does not really apply to Delta Lake today. Stripping Delta Lake down to an insert-only, Kafka-based streaming and batching solution did eventually deliver substantial throughput in our testing, but at that point it is no longer emulating the real-time transaction flow that the TPC-C benchmark is designed to capture.

If your workload fits a batch-style processing model, Delta Lake will work well. But if your workload has real-time requirements, you should seek a system engineered for OLTP transactions, such as a traditional scale-up RDBMS like Oracle Exadata or a scale-out system like Splice Machine.
