What they don’t tell you about event sourcing

Hugo Rocha
Aug 12, 2018 · 10 min read

Event sourcing and CQRS have gained a lot of popularity recently. The advantages are obvious, and they share a very peculiar symbiosis with the current state of the art in tech, making them very relevant. However, after working with them in production for several years, there are several caveats one should watch out for.

If you are not familiar with event sourcing, it comes down to modeling the application's state as a sequence of immutable events instead of saving only the latest state. Changes to the state are reflected by saving the event that triggers the change rather than mutating the current state. Processing every event in the stream produces the latest state of that entity. You can find a detailed explanation by Martin Fowler here.
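To make this concrete, here is a minimal sketch in TypeScript of what replaying a stream into the current state looks like. The entity and event names (a shopping cart, CartCreated, ItemAdded) are purely illustrative assumptions, not part of any particular library:

```typescript
// A minimal sketch of rebuilding an entity's state from its event stream.
// The ShoppingCart entity and event names are hypothetical, purely for illustration.
type CartEvent =
  | { type: "CartCreated"; cartId: string }
  | { type: "ItemAdded"; cartId: string; sku: string; quantity: number }
  | { type: "ItemRemoved"; cartId: string; sku: string };

interface CartState {
  cartId: string;
  items: Map<string, number>; // sku -> quantity
}

// Instead of updating a row, every change is appended as a new immutable event.
// The current state is derived by replaying (folding over) the stream.
function replay(events: CartEvent[]): CartState | undefined {
  return events.reduce<CartState | undefined>((state, event) => {
    switch (event.type) {
      case "CartCreated":
        return { cartId: event.cartId, items: new Map() };
      case "ItemAdded": {
        if (!state) return state;
        const items = new Map(state.items);
        items.set(event.sku, (items.get(event.sku) ?? 0) + event.quantity);
        return { ...state, items };
      }
      case "ItemRemoved": {
        if (!state) return state;
        const items = new Map(state.items);
        items.delete(event.sku);
        return { ...state, items };
      }
    }
  }, undefined);
}
```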

At first glance, it seems a terrible idea. Since each entity is represented by a stream of events there is no way to reliably query the data. I have yet to meet an application that doesn’t require some sort of querying. This is especially evident in data-intensive applications where much of the business value relies on analyzing the data. This difficulty by itself makes event sourcing unsuitable for most applications and only relevant for very isolated and specific use cases.

That’s when CQRS comes in. CQRS (command query responsibility segregation) describes the concept of having two different models, one to change information and one to read it, completely separated from each other. Asking a question shouldn’t change the response. Martin Fowler has a very interesting article about it here.
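In code, the split can be as simple as keeping commands and queries behind completely separate models. The following sketch is only illustrative; the handler interfaces and names are assumptions, not any particular framework's API:

```typescript
// Write side: handles intent, produces changes, returns no data.
interface AddItemToCart {
  cartId: string;
  sku: string;
  quantity: number;
}

interface CommandHandler<C> {
  // Issuing a command shouldn't answer questions...
  handle(command: C): Promise<void>;
}

// Read side: answers questions, never mutates anything.
interface CartView {
  cartId: string;
  totalItems: number;
}

type GetCartView = { cartId: string };

interface QueryHandler<Q, R> {
  // ...and asking a question shouldn't change the response.
  handle(query: Q): Promise<R>;
}

// e.g. a QueryHandler<GetCartView, CartView> would be implemented against the read store,
// while a CommandHandler<AddItemToCart> appends events to the write model.
```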

Greg Young identified quite well how CQRS and event sourcing share a symbiotic relationship. The limitation I talked about earlier elegantly goes away when applying event sourcing with CQRS. Having the write model separated from the read model enables the most appropriate strategy for each, and allows the write and read models to be scaled independently. Event sourcing is a particularly efficient write model since it works basically as an append-only log where new information is always added, enabling minimal locking. Since each event is irremovable and immutable, no updates or deletes are needed, enabling good write performance. On the other hand, since the read model is completely independent, you have the freedom to choose the most adequate technology to optimize for queries, which can even be a completely different technology from the write side, for example a non-relational, denormalized data store built for search (winks at Elasticsearch). It seems the best of both worlds. Or is it?
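As a rough sketch of that read side, a projector can consume the events appended by the write model (reusing the hypothetical CartEvent type from the earlier sketch) and maintain a denormalized view. The in-memory map below merely stands in for whatever query-optimized store you pick; none of this is a real API:

```typescript
// The read side: a projector consumes events from the append-only write model
// and maintains a denormalized view optimized for queries.
interface CartSummary {
  cartId: string;
  totalItems: number;
}

// Query-optimized and eventually consistent; in practice this could be a search index.
const readModel = new Map<string, CartSummary>();

function project(event: CartEvent): void {
  switch (event.type) {
    case "CartCreated":
      readModel.set(event.cartId, { cartId: event.cartId, totalItems: 0 });
      break;
    case "ItemAdded": {
      const summary = readModel.get(event.cartId);
      if (summary) summary.totalItems += event.quantity;
      break;
    }
    case "ItemRemoved":
      // A real projection would track per-sku quantities to subtract correctly;
      // kept deliberately simple here.
      break;
  }
}

// Queries never touch the event stream; they hit the denormalized view directly.
function getCartSummary(cartId: string): CartSummary | undefined {
  return readModel.get(cartId);
}
```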

Event sourcing with CQRS is the kind of architecture that is the sweet promise that brings tears to my eyes, like witnessing a powerful and beautiful once-in-a-hundred-years meteor in the night sky. As long as it is applied to the right use case… Otherwise, that meteor will come down and hit you in the face, and those tears will be of despair instead of happiness.

Eventual Consistency

By definition, the queue between the write and read model can fill up; the system can have an unforeseen peak of usage and take longer than expected to process it. You will tell yourself that this will hardly happen, that you have a fast component with a strong infrastructure behind it. However, it will happen. It will happen in the middle of the sales season of your e-commerce platform, when the functionality is needed the most.

Eventual consistency became popular with the introduction of NoSQL databases and the challenges of distributed systems. Eric Brewer’s CAP theorem illustrates how a system can be either available or consistent in the face of network partitions, but not both. Being eventually consistent allows a system to be scalable and stable, but at what cost?

Purists will say that consistency is a fairy tale: in the highly distributed world of big data, to keep your system available you need to be eventually consistent. They will be ready to (mis)quote the CAP theorem as proof that consistency belongs with the countless shipwrecks of the past, drowned by the tsunami of big data.

This kind of mindset made it acceptable to sprinkle the magic dust of eventual consistency everywhere. How did we go from taking ACID for granted, from consistency being the very foundation of software and data storage, to saying “well, everything’s eventually consistent, deal with it”?

The theory says that in distributed systems everything is eventually consistent, but the pragmatic view of the real world says we need to be really careful about what we choose to make eventually consistent. Choosing to build business-critical functionality around eventual consistency can have dire ramifications. There are use cases where availability is the property the system needs, but there are also use cases where consistency is, where it is better not to make a decision at all than to make it based on stale information. The sensibility to distinguish between these situations can be hard to master, and sometimes impossible due to the transient nature of software development. This should always be questioned when choosing to use CQRS with event sourcing; deciding to do so is always a risk.

Whole system fallacy

A whole system with every component based on event sourcing makes the interactions between those components complex and hard to follow, requiring you to dig into each one to understand the information flow. On the other hand, if every functionality affects the same event-sourced component, it will rapidly become an event-sourced monolith. Overall this pattern adds significant complexity, and you should consider whether it’s worth it. It typically shines the most when applied to pinpointed parts of the system that benefit from it, a specific bounded context in DDD terms, but never to a whole system.

Task-based UIs

Given that task-based UIs are focused on the user’s intent, they work with DDD quite well: it is seamless to create commands that translate that intent. However, if there is a strong requirement to follow a more traditional CRUD approach, the adaptation effort is rather cumbersome and the result is anything but satisfying. Your events will end up being SomethingCreated or SomethingUpdated, which has no business value at all. If the events are being designed like this, then it is clear you’re not using DDD at all and you’re better off without event sourcing. Finally, depending on how synchronous the UI and the flow of the task need to be, the eventual consistency can, and most of the time will, have a glitchy feel to it and deliver a poor user experience.
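To make the contrast visible, here is a small sketch of a CRUD-style event next to an intent-revealing, task-based command and event. All the names are made up for illustration:

```typescript
// CRUD-style: carries no business meaning, just "something changed".
interface CustomerUpdated {
  type: "CustomerUpdated";
  customerId: string;
  fields: Record<string, unknown>;
}

// Task-based: the command captures what the user actually meant to do...
interface ChangeShippingAddress {
  type: "ChangeShippingAddress";
  customerId: string;
  street: string;
  doorNumber: string;
  postalCode: string;
}

// ...and the resulting event records that intent in the ubiquitous language.
interface ShippingAddressChanged {
  type: "ShippingAddressChanged";
  customerId: string;
  street: string;
  doorNumber: string;
  postalCode: string;
  changedAt: string; // ISO timestamp
}
```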

Event schema

There are techniques similar to adapters, called upcasters, that convert events before returning them to the application. They can convert events to different versions, giving, for example, more granularity. This, however, defeats the purpose of event sourcing: the stream of events is expected to show a history of what happened, yet the application is now publishing events that were never actually stored. Associated with this, you can save the new versions of the events, called lazy upcasting; now the stream reflects what is being published, but there are several different versions of the same event in the store, which is a nightmare to manage. It is also possible to change the schema of all events at once, like would be done in a SQL table, which can mean considerable downtime and a lot of complexity managing the moment of the change, since all applications would have to change at once. In the end, events are immutable, just live with it. Having different versions of the events is the best way to handle schema changes, similar to a REST API where the application supports both the old and the new version for a given amount of time. The drawback is maintaining the code that handles all the different versions, but the different applications have time to adapt to the new ones. Also, the stream remains intact, reflecting what actually happened, which is what event sourcing is supposed to do.
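As an illustration of the upcasting idea mentioned above, here is a hypothetical sketch where a v1 event with a single address string is converted on read into a more granular v2 shape. The event shapes and the splitting logic are assumptions, and the best-effort split is exactly where history can get misrepresented:

```typescript
// v1: the address was stored as a single string.
interface AddressChangedV1 {
  type: "AddressChanged";
  version: 1;
  customerId: string;
  address: string;
}

// v2: the address was split into more granular fields.
interface AddressChangedV2 {
  type: "AddressChanged";
  version: 2;
  customerId: string;
  street: string;
  doorNumber: string;
}

// An upcaster converts old events on the fly before they reach the application.
function upcast(event: AddressChangedV1 | AddressChangedV2): AddressChangedV2 {
  if (event.version === 2) return event;
  // Best-effort split; the original event never carried separate fields,
  // which is exactly why upcasting can misrepresent what really happened.
  const [street, doorNumber = ""] = event.address.split(",").map(s => s.trim());
  return { type: "AddressChanged", version: 2, customerId: event.customerId, street, doorNumber };
}
```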

Independently of how schema changes are handled, managing these changes is one of the most complex and error-prone drawbacks associated with event sourcing. A strategy should be prepared upfront and considered in the system design.

Event granularity

In theory, and I find it to be a good rule of thumb, your commands and events should reflect the intent of the user, staying true to DDD. They should be modeled using the ubiquitous language, and part of the domain value of the application will reside in the commands and events. However, if you follow a more pragmatic approach, you can avoid some serious impacts on the consumers of your events by understanding their different needs. To illustrate this with a simple example: if a given AddressStreetChanged event is published, it clearly shows the intent of the user changing the street of the address, but how many of your listeners will need that information without, for example, the door number? To obtain it they have two options: either save the state of the address internally or ask the service that owns the data for the missing information. Both have dire consequences. With the first, you have to worry about disk space, the extra effort of building that internal state, and keeping it synchronized. Also, in a microservice architecture, several copies of the original system’s data will appear everywhere, which is a nightmare to manage, especially if the schema changes. Regarding the second option, since the read model is eventually consistent, it is possible to retrieve information that isn’t up to date with the event, i.e. the consumer can receive the event faster than the originating system updates its read model. In the previous example, if the consumer application needed to validate the address, it could retrieve a stale address that might fail the validation. Depending on the use case this can be unacceptable and trigger complex inconsistencies that are hard to trace.
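The consumer’s dilemma can be sketched roughly like this; everything here (the event shape, the fetchAddress call, the local cache) is hypothetical and only illustrates the two options above:

```typescript
// A fine-grained event only carries the street, so a consumer needing the full
// address must keep its own copy or query the owning service's read model.
interface AddressStreetChanged {
  type: "AddressStreetChanged";
  customerId: string;
  street: string;
}

interface FullAddress {
  street: string;
  doorNumber: string;
  postalCode: string;
}

// Option 1: a local, duplicated copy of the address that must be kept in sync.
const localAddressCache = new Map<string, FullAddress>();

// Option 2: ask the owning service, accepting that its read model may lag behind the event.
declare function fetchAddress(customerId: string): Promise<FullAddress>;

async function onAddressStreetChanged(event: AddressStreetChanged): Promise<void> {
  const cached = localAddressCache.get(event.customerId);
  const address = cached
    ? { ...cached, street: event.street }            // option 1: local copy
    : { ...(await fetchAddress(event.customerId)),   // option 2: may return stale data
        street: event.street };
  localAddressCache.set(event.customerId, address);
  // ...validate or act on the full address here
}
```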

The events can’t be too small, nor too large; they have to be just right. Having the instinct to get it right requires extensive knowledge of the system, the business, and the consumer applications, and it’s very easy to choose the wrong design.

Operation flexibility

Whether due to a bug or a human mistake, now and then a manual correction is required: someone has to run a SQL script or shuffle some data around. Usually there is a support team in charge of this, and they need the flexibility to fix things on the spot. On a traditional data store, a simple update will suffice. However, the events in an event store are immutable and can’t be deleted; undoing an action means sending a command with the opposite action. It is harder to affect multiple entities, and it requires knowledge of the system, unlike SQL, which everyone knows. Overall, these operations aren’t easy to do without some kind of tool prepared beforehand, which makes them more complex and error-prone and makes it hard for the team supporting these problems to do their job.
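A sketch of what such a correction looks like in an event-sourced system: instead of an UPDATE or DELETE, the support tooling issues a compensating command whose resulting event reverses the earlier one. The command and field names are purely illustrative:

```typescript
// The mistake to correct: an item was added to a cart by error.
interface ItemAddedByMistake {
  cartId: string;
  sku: string;
  quantity: number;
}

// Instead of "DELETE FROM cart_items WHERE ...", issue a command whose resulting
// event reverses the earlier one; both events stay in the stream forever.
function buildCompensatingCommand(mistake: ItemAddedByMistake) {
  return {
    type: "RemoveItem" as const,
    cartId: mistake.cartId,
    sku: mistake.sku,
    quantity: mistake.quantity,
    reason: "manual correction by support",
  };
}
```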

Wrapping it up

Event sourcing and CQRS have their limitations, as everything in life does. Knowing those limitations will empower you to make them truly shine.

Feel free to check out my other articles.
