Ah, got it. You mean that there is more/less data in the event now than before. Yes, in that case the new field will be broadcast via thousands of events as each customer entity is updated.
This implements absolute fairness. However, won't this assumption limit your capacity as a SaaS business to offer different capacities/volumes to businesses that are willing to pay more? What if you want to give more compute power to “premium” customers?
As I mentioned in the article, event-driven architectures are pub-sub driven and the publishers are not even aware of their subscribers. I wouldn’t know who is using the customer info and for what purpose, so I would emit one event and let anyone out there grab it.
I usually follow a two-step process. Emit an event for every create/update/delete of an entity. The event includes the old state of the entity and the fields which have changed, so that the new state of the entity can be generated by overlaying the latter over the former (see the sketch below). This “base” event can enable most use-cases.
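To make the overlay idea concrete, here is a minimal sketch in Python; all the names (EntityChangeEvent, new_state, the customer fields) are hypothetical illustrations, not taken from the article:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class EntityChangeEvent:
    """A 'base' entity-change event: old state plus only the changed fields."""
    entity_type: str                 # e.g. "customer"
    entity_id: str
    operation: str                   # "create" | "update" | "delete"
    old_state: dict[str, Any]        # full state before the change
    changed_fields: dict[str, Any]   # only the fields that changed

    def new_state(self) -> dict[str, Any]:
        # Overlay the changed fields on top of the old state.
        return {**self.old_state, **self.changed_fields}


# Example: a subscriber rebuilds the new state after a customer update.
event = EntityChangeEvent(
    entity_type="customer",
    entity_id="cust-42",
    operation="update",
    old_state={"id": "cust-42", "name": "Acme", "tier": "basic"},
    changed_fields={"tier": "premium"},
)
assert event.new_state()["tier"] == "premium"
```

The point of carrying both pieces is that subscribers who only care about the delta can read `changed_fields`, while those who need the full picture can reconstruct it locally without calling the publisher back.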
While this moves the authentication responsibility out of the Icebreaker service, surely it does not remove the overhead, since authentication still has to happen on every request? End-to-end success/failure rates would not improve due to this, IMO.
Congratulations. You just implemented CQRS from first principles :)
The only thing about this approach that has always bothered me is the question of what the source of truth for the data is. Does Icebreaker now treat the cache as the source of truth for booking info? If so, how do you respond to event loss over the data pipeline?