Patrick Lee Scott
Jan 9 · 2 min read

Glad you asked!

I published an article on Monday that was subtly about Event Sourcing. I linked to it in this article, but it's not super clear that it's actually about event sourcing.

Read through https://hackernoon.com/complicated-patterns-arent-always-that-complicated-usually-it-s-the-simple-ones-that-bite-you-caf870f2bf03.

It shows how to use sourced and sourced-repo-mongo to store events in mongo.
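
Roughly, the pattern looks like this. This is a trimmed-down sketch from memory with a made-up ToDoList aggregate, so treat the connection bootstrapping as approximate; digest records method calls as events, and the repository commits them to mongo and replays them on get:

```js
const util = require('util');
const { Entity } = require('sourced');
const { Repository } = require('sourced-repo-mongo');
const mongo = require('sourced-repo-mongo/mongo');

// A hypothetical aggregate, substitute your own domain model.
function ToDoList() {
  this.items = [];
  Entity.apply(this, arguments); // replays snapshot + events if provided
}
util.inherits(ToDoList, Entity);

ToDoList.prototype.initialize = function (params) {
  this.id = params.id;
  this.digest('initialize', params); // records the method call as an event
};

ToDoList.prototype.addItem = function (item) {
  this.items.push(item);
  this.digest('addItem', item);
  this.enqueue('item.added', item, this); // emitted after commit
};

mongo.connect('mongodb://localhost/event-source');
mongo.once('connected', () => {
  const repository = new Repository(ToDoList);

  const list = new ToDoList();
  list.initialize({ id: 'list-1' });
  list.addItem('write response post');

  // commit persists the new events (plus periodic snapshots) to mongo;
  // get replays them later to rehydrate the aggregate.
  repository.commit(list, (err) => {
    if (err) throw err;
    repository.get('list-1', (err2, rehydrated) => {
      if (err2) throw err2;
      console.log(rehydrated.items); // [ 'write response post' ]
    });
  });
});
```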

RabbitMQ, unlike Kafka, keeps a copy of every event in each queue only until it is consumed; once a consumer acknowledges a message, it is removed. So in RabbitMQ's case it wouldn't be possible to keep the events there as your store.
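
You can see this with a bare amqplib consumer (just an illustration, not servicebus itself): the ack is the point where RabbitMQ deletes its copy.

```js
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('events');

  await channel.consume('events', (msg) => {
    console.log('handling', msg.content.toString());
    // The ack tells RabbitMQ this message is consumed, and the broker
    // deletes its copy. Nothing is left to replay later, which is why
    // the queue itself can't be your event store.
    channel.ack(msg);
  });
})();
```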

Kafka is a distributed log, so it could be used as persistence for the events.

They are kind of each other’s opposites in a way. RabbitMQ is a “smart broker” feeding “dumb consumers”, and Kafka is a “dumb broker” serving “smart consumers” that keep track of how far into a topic they have processed (their offset).
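
Here’s a quick sketch of the “smart consumer” side using kafkajs (purely for illustration, it isn’t what servicebus uses): because the log never loses events, a fresh consumer group can replay the entire topic from offset 0.

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'replayer', brokers: ['localhost:9092'] });

(async () => {
  // Each consumer group tracks its own offset in the topic; the log
  // itself is untouched by consumption.
  const consumer = kafka.consumer({ groupId: 'projector-v2' });
  await consumer.connect();

  // fromBeginning: a brand new group starts at offset 0 and replays
  // every event ever written to the topic.
  await consumer.subscribe({ topic: 'events', fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      console.log(partition, message.offset, message.value.toString());
    },
  });
})();
```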

I think a big confusion about event sourcing is that some people talk about doing it at a system level rather than an application level. That is done by updating a “stream” with the latest state of the aggregate; the events just aren’t removed from the topic.
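
Concretely, that stream is usually a compacted topic: you key every record by aggregate id, and Kafka’s cleanup.policy=compact retains the newest record per key without ever consuming anything away. A sketch, again with kafkajs and made-up names:

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'aggregates', brokers: ['localhost:9092'] });

(async () => {
  const producer = kafka.producer();
  await producer.connect();

  // On a topic created with cleanup.policy=compact, Kafka keeps the
  // latest record per key, so this topic converges to a table of
  // current aggregate state while still being a stream of changes.
  await producer.send({
    topic: 'todo-list-state',
    messages: [
      {
        key: 'list-1', // the aggregate id
        value: JSON.stringify({ id: 'list-1', items: ['write response'] }),
      },
    ],
  });
})();
```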

Even so, you may still want to store events in MongoDB, hook into events about aggregates changing, or even bus.publish snapshots as an optimization and write them to the Kafka stream from a denormalizer service.
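
Something like this, with made-up event names (publish/subscribe is servicebus’s pub/sub API; writeToKafka is a stand-in for a real producer):

```js
const bus = require('servicebus').bus({ url: 'amqp://localhost' });

// Stand-in: in practice this would be a kafkajs producer.send(...)
function writeToKafka(topic, key, value) {}

// In the service that owns the aggregate: after committing events to
// mongo, publish a snapshot so downstream services don't need to replay.
function onSnapshotted(list) {
  bus.publish('todoList.snapshotted', { id: list.id, items: list.items });
}

// In the denormalizer service: subscribe and forward the current state
// to the Kafka stream.
bus.subscribe('todoList.snapshotted', (snapshot) => {
  writeToKafka('todo-list-state', snapshot.id, snapshot);
});
```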

Kafka Streams essentially lets you create projections on the fly through joins and aggregations, so you can do some denormalization as events arrive.

The Kafka version of servicebus is still a WIP though!

As far as caching goes, that expands on CQRS: Command Query Responsibility Segregation.

Your models are immutable sources of truth which emit events about how they change. They are not where you query data from; that is segregated, as the name implies.

So the denormalizers I mentioned earlier write to a database in a format that is optimized for reads. Then, generally, I have GraphQL read from those read databases, which brings everything full circle for app devs: GraphQL mutations trigger commands, which mutate models, which emit events, which denormalizers subscribe to, which update the read models that GraphQL queries from or subscribes to. This gives you a unidirectional workflow.
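
In code, the loop looks roughly like this (all names hypothetical, wiring abbreviated, and in reality the resolvers and the denormalizer live in separate services):

```js
const bus = require('servicebus').bus({ url: 'amqp://localhost' });
const { MongoClient } = require('mongodb');

(async () => {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const readDb = client.db('read_models');

  // GraphQL resolvers: mutations only send commands, queries only read.
  const resolvers = {
    Mutation: {
      addItem: async (_, { listId, item }) => {
        // A command is point-to-point, so send/listen rather than publish.
        bus.send('todoList.addItem', { listId, item });
        return { accepted: true };
      },
    },
    Query: {
      todoList: (_, { id }) =>
        readDb.collection('todoLists').findOne({ _id: id }), // read model only
    },
  };

  // Denormalizer: subscribe to the domain event the model emitted and
  // keep a read-optimized document up to date.
  bus.subscribe('todoList.item.added', async (event) => {
    await readDb.collection('todoLists').updateOne(
      { _id: event.listId },
      { $push: { items: event.item } },
      { upsert: true }
    );
  });

  // resolvers would be handed to your GraphQL server of choice here.
  void resolvers;
})();
```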
