Does Apache Kafka do ACID transactions?

Andrew Schofield
6 min read · Nov 5, 2018


I spend a lot of time explaining the differences between message queuing and event streaming systems. The biggest difference between message queuing systems such as IBM MQ and event streaming systems such as Apache Kafka is the idea of a stream history. Essentially, in an event streaming system, the historical events on an event stream are not immediately removed when they’re consumed. They stay around.

There’s another major difference too and that’s to do with transactions. Transactions are fundamentally a way to maintain consistency across resources. In a transactional system, the hard logic to keep things consistent is part of the infrastructure and not the applications. The application performs its work in the scope of a transaction and then commits the transaction, safe in the knowledge that either all or none of the transaction’s effects happen.

Just as a topic in MQ is not quite the same as a topic in Kafka, a transaction in MQ is not quite the same as a transaction in Kafka.

Kafka does have a feature called exactly-once semantics but it offers a much weaker guarantee than proper transactions. It looks like transactions at the API level, but if you look inside, it’s not the same. There are plenty of situations in which the Kafka guarantee is sufficient, but if you’re used to proper ACID transactions (I’ll explain that later), I would take the time to understand the difference.

Messaging and transactions in practice

Let’s look at some examples. The most basic example looks like this:

  • Begin transaction
  • Consume message from topic T1
  • Produce message to topic T2
  • Commit transaction

It simply moves a message from topic T1 to topic T2. For the duration of the transaction, the effects of the messaging operations are not permanent, but when it commits, they both become permanent. If the transaction fails, the operations are both undone.
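The mechanics above can be sketched in a few lines. This is a toy in-memory model, not real Kafka or MQ code: messaging operations are staged inside a transaction object and only become permanent on commit, while an abort puts everything back the way it was.

```python
# Toy model of a transactional consume-produce: operations are staged
# and applied atomically on commit, or undone on abort.
class Transaction:
    def __init__(self, topics):
        self.topics = topics          # topic name -> list of messages
        self.staged_consumes = []     # (topic, message) pairs taken so far
        self.staged_produces = []     # (topic, message) pairs to append

    def consume(self, topic):
        msg = self.topics[topic].pop(0)            # provisionally remove
        self.staged_consumes.append((topic, msg))
        return msg

    def produce(self, topic, msg):
        self.staged_produces.append((topic, msg))  # buffered, not yet visible

    def commit(self):
        for topic, msg in self.staged_produces:
            self.topics[topic].append(msg)         # make produces permanent
        self.staged_consumes.clear()
        self.staged_produces.clear()

    def abort(self):
        for topic, msg in reversed(self.staged_consumes):
            self.topics[topic].insert(0, msg)      # put consumed messages back
        self.staged_consumes.clear()
        self.staged_produces.clear()

topics = {"T1": ["order-42"], "T2": []}
tx = Transaction(topics)
msg = tx.consume("T1")
tx.produce("T2", msg)
tx.commit()
# topics is now {"T1": [], "T2": ["order-42"]}
```

If the transaction aborts instead of committing, the message reappears on T1 and nothing is written to T2, which is exactly the all-or-nothing behaviour described above.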

A more complicated example involves two different resource managers, and I’ll illustrate using a messaging system and a relational database. The messaging system is being used to move data safely from one database to another. The first transaction involving the source database and the messaging system looks like this:

  • Begin transaction
  • Read row from source database
  • Produce message containing row data to topic T
  • Delete row from source database
  • Commit transaction

Then a second transaction involving the target database and the messaging system looks like this:

  • Begin transaction
  • Consume message containing row data from topic T
  • Insert row into target database
  • Commit transaction

For the period between the two transactions, the data that was in the database is actually only within the messaging system. There’s a precise, one-to-one relationship between the row in the database and the message. The key here is that, in both transactions, the database and messaging system are being coordinated so that they commit together. This is an example of a distributed transaction and it uses a technique called two-phase commit.
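The shape of two-phase commit can be sketched like this. It's a deliberately simplified toy, with invented participant and coordinator classes, not a real XA implementation: every participant must first vote yes in a prepare phase before the coordinator tells any of them to commit, so the database delete and the message produce land together or not at all.

```python
# Toy two-phase commit: a coordinator drives a "database" and a
# "queue manager" through prepare, then commit, so their changes
# become permanent together.
class Participant:
    def __init__(self, name):
        self.name = name
        self.state = {}        # committed data
        self.pending = None    # staged change awaiting the commit decision

    def prepare(self, change):
        self.pending = change  # stage the change durably; vote yes
        return True

    def commit(self):
        key, value = self.pending
        if value is None:
            self.state.pop(key, None)   # a None value models a delete
        else:
            self.state[key] = value
        self.pending = None

    def rollback(self):
        self.pending = None

def two_phase_commit(participants_and_changes):
    # Phase 1: everyone must prepare (vote yes) before anything commits.
    prepared = []
    for participant, change in participants_and_changes:
        if participant.prepare(change):
            prepared.append(participant)
        else:
            for p in prepared:
                p.rollback()           # any "no" vote aborts the lot
            return False
    # Phase 2: all voted yes, so the decision is commit for everyone.
    for participant, _ in participants_and_changes:
        participant.commit()
    return True

db = Participant("source-db")
db.state = {"row-1": "row data"}
mq = Participant("queue-manager")
ok = two_phase_commit([(db, ("row-1", None)),     # delete from source
                       (mq, ("T", "row data"))])  # produce to topic T
# ok is True; the row data now lives only in the messaging system
```

The crucial property is that once every participant has voted yes, the outcome is decided centrally; no participant can unilaterally forget its part of the transaction afterwards.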

It’s entirely reasonable to ask at this point why anyone would build a system based on distributed transactions and two-phase commit. Surely, that’s an anti-pattern. That’s not going to scale, right? Well, possibly, but there are many business applications in existence which make broad use of transactions involving MQ and databases because the application logic is so straightforward. Regular application teams can achieve the magical feat of moving data between systems, potentially across large distances, without loss or duplication.

IBM MQ can achieve both of these examples with ease. Apache Kafka can only do the first easily. If you’re a total expert, you could also achieve the second with some extremely carefully written application code, ensuring in all cases and failure modes that there is no data loss and no duplication. This is not at all trivial and I’ve seen people try and fail to get it right.

So, my point is that it’s technically possible to do it with Kafka, but it’s adding complexity to the application.

Messaging and the ACID properties

Transactional systems implement the four ACID properties: Atomicity, Consistency, Isolation, and Durability. The definitions of those really apply to databases but the overall ideas can be applied to messaging systems too. It goes something like this:

  • Atomicity: a transaction behaves as a single unit which either succeeds completely or fails completely
  • Consistency: a transaction takes the system from one valid state to another — for messaging, no messages are lost or duplicated along the way
  • Isolation: the effects of a transaction are made visible to all observers at the same time, and uncommitted effects are not visible to others
  • Durability: once a transaction has committed, it remains committed even in the event of a system failure

In IBM MQ, there’s a single recovery log for each queue manager and log records for all of the messaging operations and transactions are appended to it. The log is written synchronously to disk at critical points which is relatively slow but it pays dividends in terms of data integrity. Once the log record that represents the transaction committing is written to the log, you know that the transaction is properly atomic and durable.
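The synchronous-logging idea can be sketched as a toy write-ahead log. This is a sketch in the spirit of what's described above, not MQ's actual log format: operation records are appended cheaply, but the commit record is forced to disk with `fsync` before the transaction is acknowledged, which is what makes the commit point durable.

```python
# Toy write-ahead log: the transaction only counts as committed once
# its commit record has been forced to disk.
import os
import tempfile

log_file = tempfile.NamedTemporaryFile(mode="a", suffix=".log", delete=False)

def log_append(record, force=False):
    log_file.write(record + "\n")
    if force:
        log_file.flush()
        os.fsync(log_file.fileno())  # synchronous disk write: slow but durable

log_append("PUT queue=Q1 msg=hello")    # operation record, buffered
log_append("COMMIT txn=1", force=True)  # commit point: now survives a crash
```

Everything before the commit record can be redone or undone from the log during recovery; once the forced write returns, the outcome of the transaction can never be forgotten.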

In Apache Kafka, the exactly-once-semantics APIs are a powerful tool for stream processing applications, but the transactional guarantees are relatively weak. If a transaction uses two different partitions, the leader for each partition is responsible for recording the operations into its own log. There’s also an internal topic used to record the overall transaction state. So, the durable state of the transaction is spread across multiple logs and potentially multiple servers. If you examine the design of transaction commit in Kafka, it looks a little like a two-phase commit with a prepare-to-commit control message on the transaction state topic, then commit markers on real topics, and finally a commit control message on the transaction state topic. It’s clever but it’s more fragile. Then you factor in the way that Kafka writes to its log asynchronously and you can see that what Kafka considers to be a committed transaction is not really atomic at all.
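The commit sequence just described can be sketched as follows. This is an illustrative simplification of the protocol, not actual broker code, but it shows why spreading the transaction's durable state across several logs makes the window of fragility real: a failure partway through leaves some partitions with their commit markers and others without.

```python
# Illustrative sketch of Kafka-style transaction commit: the decision
# is recorded on a transaction-state topic, then markers are written
# to each partition's own log, then the transaction is finalised.
txn_state_log = []                         # internal transaction-state topic
partition_logs = {"T-p0": [], "T-p1": []}  # one log per partition leader

def commit_transaction(partitions, crash_before_partition=None):
    txn_state_log.append("PREPARE_COMMIT")         # 1. decision recorded
    for i, p in enumerate(partitions):
        if crash_before_partition == i:
            return "crashed"                       # failure mid-protocol
        partition_logs[p].append("COMMIT_MARKER")  # 2. marker per partition
    txn_state_log.append("COMMIT_COMPLETE")        # 3. transaction finalised
    return "committed"

outcome = commit_transaction(["T-p0", "T-p1"], crash_before_partition=1)
# T-p0 now has its commit marker but T-p1 does not, and the state topic
# says PREPARE_COMMIT without COMMIT_COMPLETE: recovery has to finish
# writing the missing markers from the recorded decision.
```

In the real system, recovery from the transaction-state topic is what completes an interrupted commit; the fragility the article describes comes in when the logs holding that state are themselves written asynchronously.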

Under normal operation, it will all work fine, but it doesn’t take much imagination to think of a failure that can break ACID. For example, if a partition is under-replicated and the leader suffers an unexpected power outage, a little data loss could occur and this breaks the integrity of the transaction. A correlated hard failure such as a power outage that affects all brokers could even result in commit/abort markers becoming lost in all replicas. You deploy Kafka in such a way as to minimise and hopefully eliminate these kinds of problems, but there’s still an element of asynchronous durability in the mix.

This becomes particularly important if there are other resources such as databases being coordinated with the messaging system. We need the level of transaction guarantee from both systems to match. The consistency and durability guarantees must apply equally for all resources. If one participant in a transaction is a bit forgetful after a failure, the transaction integrity is lost. This is why writing to the log synchronously is such a big deal when coordinating with other resource managers; it makes it clear what level of guarantee is being provided, and that makes it easy to match it on all systems.

Wrapping up

Now you understand the difference between ACID transactions and Kafka’s exactly-once semantics. You cannot just pick up a business application that uses transactions and get exactly the same results with Apache Kafka without looking carefully at the existing code, thinking about what fundamental guarantees you need for the different pieces, and designing its replacement very carefully. You can definitely build proper, rock-solid business applications, but the code that you write is likely to be very different. For example, you might choose to permit occasional message duplication so that you can safely retry, and that probably brings with it idempotent processing of the messages. Not exactly rocket science, but different. For stream processing applications using the Kafka Streams API, exactly-once semantics are in their sweet spot and make a lot of sense.
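The duplicate-plus-idempotence approach mentioned above looks something like this in miniature. The message ids, the account, and the helper are invented for illustration: each message carries a unique id, and the consumer remembers which ids it has already applied, so a redelivered message changes nothing.

```python
# Sketch of idempotent processing: duplicates are tolerated on delivery
# and filtered out at processing time by remembering message ids.
processed_ids = set()
balance = {"acct": 0}

def apply_once(msg_id, amount):
    if msg_id in processed_ids:
        return False              # duplicate delivery: ignore it
    balance["acct"] += amount     # the actual business effect
    processed_ids.add(msg_id)
    return True

apply_once("m1", 100)
apply_once("m1", 100)   # a retry of the same message: no double credit
# balance["acct"] is 100, not 200
```

In a real system the set of processed ids would itself need to be stored durably, and atomically with the business effect, which is where the careful design comes back in.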

So, does Apache Kafka do ACID transactions? Absolutely not. No way. Can you get a similar effect? If you design your applications in the right way, yes. Does it matter? In many cases, not really, but when it does, you absolutely don’t want to get it wrong. Just take the time to understand the guarantees that you need to make your system reliable and choose accordingly.


Andrew Schofield

Software engineer and architect at Confluent. Messaging expert. Apache Kafka contributor. My words and opinions are my own.