“Classic” Event Sourcing on Top of Lazy One

Yurii Rashkovskii
Eventsourcing Publications
4 min read · Jun 8, 2016

I’ve received a number of comments on my proposal for lazy event sourcing / an event sourcing database, both in person and online. The most vocal are those claiming that this is not event sourcing. Unfortunately, I often can’t get a more detailed answer. It’s easier to explain the line of thinking behind the current Eventsourcing implementation in person, but I can’t possibly meet everyone, so I’ll try to explain it here.

First of all, I want to show that you can use it to implement what we can call a “classic” event sourcing model (often complemented by DDD and CQRS), for lack of a better term. These are some of the criteria I use to identify that model (the list is not exhaustive, and not all of the criteria have to hold, so the set is deliberately loose):

  • Processing events as they come
  • Processing events at appropriate aggregates
  • Focusing on early domain binding (aggregates, process managers, etc.)
  • Producing a read-side “state” database by applying events
  • Being able to rebuild (“replay”) this database from events

Even though the lazy event sourcing model focuses on late domain binding, you can still take proactive action as every command is journalled. That’s what entity subscribers are for. They can be used to filter out the events we are interested in while the command is still being processed, and to receive a stream of them once they have been journalled. Journalling works so that no event is considered journalled unless all other events produced by the same command are successfully journalled as well, making every command a transaction scope.
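To make the mechanics concrete, here is a minimal Python sketch of this idea. It is not the actual Eventsourcing (ES4J) API — all names (`Journal`, `journal_command`, and so on) are hypothetical — but it shows how subscribers see only complete, atomically journalled commands:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Event:
    type: str
    payload: dict

@dataclass
class Journal:
    """Toy journal: a command's events become visible (and subscribers are
    notified) only after ALL of them are journalled, so each command acts
    as a transaction scope."""
    events: List[Event] = field(default_factory=list)
    subscribers: List[Tuple[Callable[[Event], bool],
                            Callable[[Event], None]]] = field(default_factory=list)

    def subscribe(self, predicate, handler):
        # An "entity subscriber": a filter plus a handler for matching events.
        self.subscribers.append((predicate, handler))

    def journal_command(self, produced_events):
        staged = list(produced_events)   # stage every event the command produced
        self.events.extend(staged)       # commit all of them, or none
        for event in staged:             # notify subscribers only after the commit
            for predicate, handler in self.subscribers:
                if predicate(event):
                    handler(event)
```

A subscriber registered with `subscribe(lambda e: e.type == "UserCreated", handler)` would then receive only the `UserCreated` events out of each journalled command.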

From these subscribers, we can bind the domain, produce any kind of read-side “state” database, etc. Now, if we want to replay events, we can query for all events of a certain type and reduce them to whatever we need, for example, to accommodate changes in the read-side database.
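The replay step above can be sketched in a few lines. Again, this is an illustrative Python reduction over a toy journal, not the library’s query API:

```python
from functools import reduce

# Hypothetical journal contents: (event_type, payload) pairs in journal order.
journal = [
    ("Deposited", 100),
    ("Withdrawn", 30),
    ("Deposited", 50),
]

def replay(journal, event_type, step, initial):
    """Query all events of a given type and reduce them into read-side state."""
    matching = (payload for etype, payload in journal if etype == event_type)
    return reduce(step, matching, initial)

# Rebuild one piece of read-side state from scratch:
total_deposited = replay(journal, "Deposited", lambda acc, amount: acc + amount, 0)
# total_deposited == 150
```

Changing the read-side schema then just means re-running `replay` with a different reduction over the same events.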

“Classic” event sourcing model in Eventsourcing

At this point, we’re simply not using some of Eventsourcing’s capabilities in order to get some of the benefits of this “classic” model. In fact, real-world “lazy” event sourcing systems that use the entire range of Eventsourcing’s capabilities will still end up using entity subscribers: for early-binding optimizations needed for the performance or reactivity of the system, and for integrations with third-party or legacy systems.

Earlier, I mentioned process managers. In this model, a process manager is just another subscriber that orchestrates a binding or business-logic process upon receipt of a certain type of event.
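As a sketch, a process manager is a subscriber whose handler issues follow-up commands. The names below (`ShippingProcessManager`, `OrderPaid`, `ShipOrder`) are hypothetical examples, not part of any real API:

```python
class ShippingProcessManager:
    """Illustrative process manager: reacts to one event type and
    orchestrates the next step by issuing a follow-up command."""

    def __init__(self, command_bus):
        self.command_bus = command_bus  # wherever new commands are sent

    def matches(self, event):
        # The subscriber's filter: only paid orders are interesting here.
        return event["type"] == "OrderPaid"

    def handle(self, event):
        # Orchestrate the next step of the business process.
        self.command_bus.append({"command": "ShipOrder",
                                 "order_id": event["order_id"]})

commands = []
pm = ShippingProcessManager(commands)
for event in [{"type": "OrderPaid", "order_id": 42},
              {"type": "OrderCancelled", "order_id": 43}]:
    if pm.matches(event):
        pm.handle(event)
# commands now holds a single ShipOrder command for order 42
```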

Now, I want to address the claim that the “lazy” model I am referring to is not actually event sourcing.

Lazy event sourcing model

It is a little difficult to address, as the arguments vary, but I’ll do my best.

One of the arguments against it was that it doesn’t really employ DDD principles to design cleanly separated contexts. My answer (and thanks to somebody at the Polyglot Conference in Vancouver for the term!) is that we’re simply doing late binding to the domain: not when the changes are recorded, but when the elements of the domain are requested. The immediate benefit is that we can have multiple, even partially overlapping, domains at the same time, depending on how we look at the problem. It also allows us to improve domains relatively painlessly as we gain more knowledge (there is no “global schema” to migrate). This approach does have potential drawbacks. For example, a cleanly designed ubiquitous language for every domain is definitely harder to achieve, as our understanding of the domains and the language evolves over time. On the other hand, it helps us keep track of the evolution of the domains without having to design everything upfront (which is impossible in the real world anyway).
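The idea of overlapping domains bound at read time can be sketched as two independent views derived from the same journalled events (a toy Python example with made-up event shapes):

```python
# The same journalled events, viewed through two overlapping "domains",
# each bound lazily at read time rather than when the events were recorded.
events = [
    {"type": "ItemSold", "sku": "A", "price": 10, "customer": "alice"},
    {"type": "ItemSold", "sku": "B", "price": 25, "customer": "bob"},
    {"type": "ItemSold", "sku": "A", "price": 10, "customer": "alice"},
]

def revenue_view(events):
    """Finance domain: revenue per SKU."""
    view = {}
    for e in events:
        if e["type"] == "ItemSold":
            view[e["sku"]] = view.get(e["sku"], 0) + e["price"]
    return view

def customer_view(events):
    """CRM domain: purchase count per customer, from the very same events."""
    view = {}
    for e in events:
        if e["type"] == "ItemSold":
            view[e["customer"]] = view.get(e["customer"], 0) + 1
    return view
```

Neither view is “the” schema; adding a third interpretation later requires no migration, only another function over the same events.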

Another argument was that it is not event sourcing because Eventsourcing also persists commands. I think this is more of a misunderstanding. The fact that commands are persisted doesn’t alter the role of events as records of actual state changes. Commands are there to preserve causality (why is that event there in the first place?) and to log exceptions, should they occur.

Another misconception was that we use the event store (or journal) as a read side. It might appear that way at first sight, because we query indices that are built on top of it. My own take is that the indices are the database, for all practical purposes, and the fact that we use the journal to read out the actual values is just a storage optimization (why store the same data twice?). Think about using a relational database as a read side: you have tables and indices. Without indices, you have to scan through rows (effectively, reductions of events), which is often inefficient, so with any reasonable amount of data you have to establish indices. The difference here is that you would index event reductions, while Eventsourcing indexes the events themselves.
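The “indices are the database, the journal holds the values” arrangement can be sketched like this (a toy Python model, not the library’s actual index implementation):

```python
# Toy setup: the index maps attribute values to journal offsets, and the
# actual event data is read back from the journal itself, so nothing is
# stored twice.
journal = []        # append-only event storage
index_by_type = {}  # event type -> list of journal offsets

def journal_event(event):
    offset = len(journal)
    journal.append(event)
    index_by_type.setdefault(event["type"], []).append(offset)

def query(event_type):
    # Hit the index for offsets, then read the values out of the journal.
    return [journal[offset] for offset in index_by_type.get(event_type, [])]

journal_event({"type": "UserCreated", "id": 1})
journal_event({"type": "UserRenamed", "id": 1, "name": "Yurii"})
journal_event({"type": "UserCreated", "id": 2})
# query("UserCreated") returns the two UserCreated events without scanning
# the whole journal
```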

If you have more arguments on either side, please leave a comment! I want to be able to address them properly.

My goal is to make Eventsourcing even more suitable for practitioners of the “classic” model. Our project contribution policy is C4, which essentially guarantees the right to contribute without the roadblocks of value judgement. If you feel something is missing, please contribute: code and problem reports are both valuable!
