Welcome Events

AlexPodles
Dec 12, 2023

One of the most important, and in my experience most misunderstood, concepts nowadays is events, and event-driven architecture in particular.

I have faced situations many times where people struggled to answer why we need to use events instead of REST, how to properly design them, and more importantly, understand the eventing flow. It’s also crucial to understand that events are an architectural approach, not a specific technology like RabbitMQ or SQS, although they can be implemented using such software.

In this article, I will try to explain the main purpose of events, why we need them, and what major properties they have.

For such explanations, I like using the reinventing approach, where we take a problem as an example and try to solve it as if we don’t know the answer (or specific technologies if they are the answer), aiming to find a rationale for using specific architectural decisions.

So, let's do it now as well. For this we will need to step back and try to solve important business requirements without using events.

First attempt: naive implementation

Imagine the following abstract requirements:

  1. Simple REST controller for submitting the order
  2. On submission, the order should be persisted
  3. On submission, inventory should be decreased
  4. On submission, an email should be sent to the user

Seems easy. Let’s dive into implementation.

using Order.API;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<OrderService>();
builder.Services.AddSingleton<InventoryService>();
builder.Services.AddSingleton<NotificationService>();

var app = builder.Build();

app.MapPost("/order/{id}/submit", (string id, OrderService service) => service.Submit(id));

app.Run();

and services:

public class OrderService
{
    private readonly InventoryService _inventoryService;
    private readonly NotificationService _notificationService;
    private readonly ILogger<OrderService> _logger;

    public OrderService(InventoryService inventoryService, NotificationService notificationService, ILogger<OrderService> logger)
    {
        _inventoryService = inventoryService;
        _notificationService = notificationService;
        _logger = logger;
    }

    public string Submit(string id)
    {
        var order = new Order(id, Quantity: 1, Status: "Submitted");
        PersistOrder(order);
        _inventoryService.ReserveStock(order);
        _notificationService.SendEmail(order);

        return $"Order {id} submitted";
    }

    private void PersistOrder(Order order) => _logger.LogInformation($"Persisted order {order}");
}

public class InventoryService
{
    private readonly ILogger<InventoryService> _logger;
    public InventoryService(ILogger<InventoryService> logger) => _logger = logger;
    public void ReserveStock(Order order) => _logger.LogInformation($"Inventory reserved for order {order}");
}

public class NotificationService
{
    private readonly ILogger<NotificationService> _logger;
    public NotificationService(ILogger<NotificationService> logger) => _logger = logger;

    public void SendEmail(Order order) => _logger.LogInformation($"Email sent for order {order}");
}

public record Order(string Id, int Quantity, string Status);

And a plain test:

curl -X POST http://localhost:5175/order/123/submit

info: Order.API.OrderService[0]
Persisted order Order { Id = 123, Quantity = 1, Status = Submitted }
info: Order.API.InventoryService[0]
Inventory reserved for order Order { Id = 123, Quantity = 1, Status = Submitted }
info: Order.API.NotificationService[0]
Email sent for order Order { Id = 123, Quantity = 1, Status = Submitted }

Identifying design flaws

So far so good. But what can go wrong? Even with such a simple implementation, we are already facing several design issues:

  1. Single Responsibility Principle (SRP) violation. Even though we use abstractions to hide implementation details, we still depend on the external APIs. Also, these abstractions might be the responsibility of an external team. That increases the number of axes of change, as OrderService now has several actors that might push competing changes to it. As a simple example, think about what will happen if the Inventory team decides to change the semantics of ReserveStock. At the very least, it will require a certain level of communication with the Orders team.
  2. Open-Closed Principle (OCP) violation. Can we easily extend the Submit functionality without actually modifying it? Just imagine we have another requirement to create a payment transaction after an order is submitted; with the current implementation we would need to make these changes directly in the OrderService. Again, this requires communication with the Orders team. Communication isn't bad in itself, but if we explore the nature of this communication we will find the next problem.
  3. Dependency Inversion Principle (DIP) violation. This principle states that upstream services should not depend on downstream ones, and both should depend on abstractions. The problem in our case is that when we need to add payment creation functionality, we have to go to the upstream service, which is Orders in our case, and initiate changes there.

Design improvement

Cool, understanding the problem is a solid part of solving it, so with this knowledge, we will try to adjust our system.

From my experience, I can say that many design principles are interconnected and a violation of one of them can easily result in a violation of another. So my assumption here is: if we fix one of the described violations:

  1. Other violations will be fixed automatically
  2. It will be easier to fix the remaining violations

Ok, let's start from the beginning. Can we fix the SRP violation now? To do this, we will need to decouple the current implementation: instead of injecting abstractions directly, we will observe some artifacts and react to them. We can implement some sort of watcher that looks into the orders collection and finds newly submitted orders. At first glance, this might solve the problem, even if we ignore the technical complexity. Indeed, another team, say Payments, can now just implement a similar watcher and build additional functionality on top of it. Unfortunately, this only pushes the problem deeper, down to the collection level, and here is why:

  1. We would have to give full access to our collection, which breaks the encapsulation and abstraction of the Orders service.
  2. The same SRP violation reappears, as our collection is now used both for persisting and managing order state and for finding newly submitted orders.

This might be mitigated with a separate collection containing only the data we want to share with other participants; we will use this idea later.
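To make the watcher idea a bit more tangible, here is a minimal sketch of what such a component could look like (IOrdersCollection and its FindNewlySubmitted method are hypothetical, introduced only for illustration):

// Hypothetical polling watcher; IOrdersCollection is an illustrative assumption, not part of the original code.
public class SubmittedOrdersWatcher : BackgroundService
{
    private readonly IOrdersCollection _orders; // direct access to the Orders data, which is exactly the encapsulation problem
    private readonly ILogger<SubmittedOrdersWatcher> _logger;

    public SubmittedOrdersWatcher(IOrdersCollection orders, ILogger<SubmittedOrdersWatcher> logger)
    {
        _orders = orders;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // poll the collection for orders submitted since the last check
            foreach (var order in _orders.FindNewlySubmitted())
                _logger.LogInformation($"Reacting to submitted order {order}");

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}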

Maybe OCP is the key to fixing our problem. When I think about OCP, decorators are the first thing that comes to my mind, so basically we need to implement something like the following:

InventoryServiceDecorator(Submit(..))
NotificationServiceDecorator(Submit(..))
PaymentServiceDecorator(Submit(..))

and in this case, we can remove all our abstractions from the OrderService and use the result of the Submit method directly in our decorators. This is a good piece of information for us and we will use it later as well, but at the moment we still don't have a good approach to fetching newly submitted orders.
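As a rough sketch of that decorator idea (note that the IOrderSubmission abstraction returning the submitted Order is an assumption introduced here purely for illustration):

// Hypothetical decorator over the order submission.
public interface IOrderSubmission
{
    Order Submit(string id);
}

public class InventoryServiceDecorator : IOrderSubmission
{
    private readonly IOrderSubmission _inner;
    private readonly InventoryService _inventoryService;

    public InventoryServiceDecorator(IOrderSubmission inner, InventoryService inventoryService)
    {
        _inner = inner;
        _inventoryService = inventoryService;
    }

    public Order Submit(string id)
    {
        var order = _inner.Submit(id);          // the decorated submission runs first
        _inventoryService.ReserveStock(order);  // the decorator reacts to its result
        return order;
    }
}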

Let's move on and examine our DIP violation. Let's say it again: upstream services should not depend on downstream ones, and both should depend on abstractions. It seems that one important part is missing in our implementation: abstractions. And really, do we have a solid abstraction on which both upstream and downstream services can depend? Introducing one doesn't seem hard; it doesn't require tons of technical implementation or architectural decisions.

Our abstraction will look like this:

public interface IReactOnSubmittedOrders
{
    public void React(Order order);
}

and changes in services:

public class InventoryService : IReactOnSubmittedOrders
{
    private readonly ILogger<InventoryService> _logger;
    public InventoryService(ILogger<InventoryService> logger) => _logger = logger;
    public void React(Order order) => _logger.LogInformation($"Inventory reserved for order {order}");
}

public class NotificationService : IReactOnSubmittedOrders
{
    private readonly ILogger<NotificationService> _logger;
    public NotificationService(ILogger<NotificationService> logger) => _logger = logger;
    public void React(Order order) => _logger.LogInformation($"Email sent for order {order}");
}

public class OrderService
{
    private readonly IEnumerable<IReactOnSubmittedOrders> _downstreamServices;
    private readonly ILogger<OrderService> _logger;

    public OrderService(IEnumerable<IReactOnSubmittedOrders> downstreamServices, ILogger<OrderService> logger)
    {
        _downstreamServices = downstreamServices;
        _logger = logger;
    }

    public string Submit(string id)
    {
        var order = new Order(id, Quantity: 1, Status: "Submitted");
        PersistOrder(order);

        foreach (var downstreamService in _downstreamServices)
            downstreamService.React(order);

        return $"Order {id} submitted";
    }

    private void PersistOrder(Order order) => _logger.LogInformation($"Persisted order {order}");
}

and we will need to register the service like this:

builder.Services.AddSingleton<IReactOnSubmittedOrders, InventoryService>();
builder.Services.AddSingleton<IReactOnSubmittedOrders, NotificationService>();

If we run our test API call, we should see the same result.

And here, step by step, we have arrived at the first important event property: broadcasting.

The nature of our method requires the following communication:

Broadcasting service decisions to downstream services

What is the subject of this broadcast? It is service decisions, as simple as that. Every time the Orders module receives an external request, it decides whether the received order can be submitted, and if it can, we need to transfer this decision to the downstream services.

So now, every time we are interested in this decision, we just need to implement a specific abstraction on the external module side (we can consider Inventory and Notification as separate modules here).
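For example, the Payments functionality mentioned earlier could be plugged in without touching OrderService at all; a hypothetical sketch:

// PaymentService is an illustrative assumption, not part of the original code.
public class PaymentService : IReactOnSubmittedOrders
{
    private readonly ILogger<PaymentService> _logger;
    public PaymentService(ILogger<PaymentService> logger) => _logger = logger;
    public void React(Order order) => _logger.LogInformation($"Payment transaction created for order {order}");
}

// plus one more registration next to the existing ones:
builder.Services.AddSingleton<IReactOnSubmittedOrders, PaymentService>();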

Great job! Let’s think about what we’ve fixed so far:

  1. SRP — yes, we no longer have multiple axes of change, as the Orders team now dictates the contract that should be used.
  2. OCP — yes, if additional modules need to react to the fact that an order was submitted, they just need to implement the provided abstraction (and register the service in the DI container, but that is a technical detail). It means we can now extend the Submit functionality, for example by adding our Payments logic, without actually modifying OrderService.
  3. DIP — partially. Why only partially? We've added an abstraction, so the upstream service should not depend on the downstream ones, as both now depend on the provided abstraction. However, only the second part of this statement is true. We will need some additional analysis to figure out why the first part is still not fixed.

Ok, the first part is that the upstream service should not depend on the downstream. At first sight it might look like this is already the case, but consider the following service:

public class BuggyService : IReactOnSubmittedOrders
{
    public void React(Order order) => throw new Exception("i was not able to react on submitted orders properly");
}

It turns out our Orders service is still quite dependent on the downstream services, and this seriously impacts the Orders team: such a BuggyService leads to errors in the Submit functionality, so the Orders team needs to collaborate with another team to request changes or fix the bug. Not good, right?

To make things worse, let's consider the following service:

public class DispatchingService : IReactOnSubmittedOrders
{
    public void React(Order order)
    {
        // i will sleep until i get manually confirmed by appropriate manager
        Thread.Sleep(-1);
    }
}

While it is still possible to react to the errors received from the downstream services, it is not clear what to do when the downstream process is waiting for some manual actions.

And here comes the second major event property: asynchronicity. It comes from the unidirectional nature of decision broadcasting. And it makes sense: when we look more closely, we discover that broadcasting an upstream service's decision doesn't require any response from the downstream service, mainly because the decision has already been taken, so the downstream service treats it as a fact.

Let's go and fix this issue:

public interface IReactAsyncOnSubmittedOrders
{
    public Task ReactAsync(Order order);
}

and we will need to change all service implementations from IReactOnSubmittedOrders to IReactAsyncOnSubmittedOrders and implement it:

public async Task ReactAsync(Order order) => _logger.LogInformation($"Inventory reserved for order {order}"); // InventoryService.cs

public async Task ReactAsync(Order order) => _logger.LogInformation($"Email sent for order {order}"); // NotificationService.cs

public async Task ReactAsync(Order order) => throw new Exception("i was not able to react on submitted orders properly"); // BuggyService.cs: we won't see this exception unless we await, but still

and our OrderService will look like this:

public class OrderService
{
    private readonly IEnumerable<IReactAsyncOnSubmittedOrders> _downstreamServices;

    public string Submit(string id)
    {
        // ..

        foreach (var downstreamService in _downstreamServices)
            downstreamService.ReactAsync(order); // we are not awaiting here, as we want async operation

        // ..
    }
}

Do not forget to re-register downstream services:

builder.Services.AddSingleton<IReactAsyncOnSubmittedOrders, InventoryService>();
builder.Services.AddSingleton<IReactAsyncOnSubmittedOrders, NotificationService>();

Again, running the test example will show you the original output.

Introducing Broadcaster

All done here? Not yet. Although we've achieved our goals, our OrderService is still aware of all downstream services that are going to react to its decisions.

Is it bad? Yes. Why? Because of coupling and performance.

Coupling and lack of abstractions: it was simple to inject services while they all lived in the same application. Imagine we are running a modulith or microservices. In this case, we will most likely have separate applications for the Orders and Inventory modules, each with its own database and infrastructure. Now, it becomes much harder to keep the described solution.

Performance: imagine a really popular decision with more than two or three dependants. We still need to iterate over the whole collection and trigger decision processing ourselves.

All we need to do is introduce another abstraction and move iteration to the specific broadcaster:

Broadcaster component

In the simplified form, it will look like this:

public interface IBroadcastDecisions<T> where T : Decision
{
    public Task Broadcast(T decision);
}

public abstract record Decision;

our OrderService will change like this:

public record OrderSubmitted(Order SubmittedOrder) : Decision;

public class OrderService
{
    private readonly IBroadcastDecisions<OrderSubmitted> _broadcaster;

    public string Submit(string id)
    {
        // ..

        _broadcaster.Broadcast(new OrderSubmitted(order));

        // ..
    }
}

and our broadcaster:

public class Broadcaster : IBroadcastDecisions<OrderSubmitted>
{
    private readonly IEnumerable<IReactAsyncOnSubmittedOrders> _downstreamServices;

    public Broadcaster(IEnumerable<IReactAsyncOnSubmittedOrders> downstreamServices) => _downstreamServices = downstreamServices;

    public async Task Broadcast(OrderSubmitted decision)
    {
        foreach (var downstreamService in _downstreamServices)
            downstreamService.ReactAsync(decision.SubmittedOrder); // still fire-and-forget, as before
    }
}
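For the test call to keep working, the broadcaster itself also has to be registered; assuming the downstream services stay registered as before, a minimal wiring could be:

builder.Services.AddSingleton<IBroadcastDecisions<OrderSubmitted>, Broadcaster>();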

Testing the endpoint again, and it works.

When we try this with other decisions, we will notice that the solution works but doesn't scale well, because we need to extend Broadcaster with another specific IBroadcastDecisions implementation every time we add a new decision type, for example:

public class Broadcaster :
    IBroadcastDecisions<OrderSubmitted>,
    IBroadcastDecisions<OrderFailed>
{
    private readonly IEnumerable<IReactAsyncOnSubmittedOrders> _downstreamOrderSubmittedServices;
    private readonly IEnumerable<IReactAsyncOnFailedOrders> _downstreamOrderFailedServices;

    public Broadcaster(
        IEnumerable<IReactAsyncOnSubmittedOrders> downstreamOrderSubmittedServices,
        IEnumerable<IReactAsyncOnFailedOrders> downstreamOrderFailedServices)
    {
        _downstreamOrderSubmittedServices = downstreamOrderSubmittedServices;
        _downstreamOrderFailedServices = downstreamOrderFailedServices;
    }

    public async Task Broadcast(OrderSubmitted decision)
    {
        foreach (var downstreamService in _downstreamOrderSubmittedServices)
            downstreamService.ReactAsync(decision.SubmittedOrder);
    }

    public async Task Broadcast(OrderFailed decision)
    {
        foreach (var downstreamService in _downstreamOrderFailedServices)
            downstreamService.ReactAsync(decision.FailedOrder);
    }
}

Let's rewrite it a bit to make it more general.

  1. We will use a generic interface that supports every decision type:

public interface IReactAsyncOn<in T> where T : Decision
{
    public Task ReactOn(T decision);
}

2. We will change the broadcaster:

public interface IBroadcastDecisions
{
    public Task Broadcast(Decision decision);
}

public class Broadcaster : IBroadcastDecisions
{
    private readonly IServiceProvider _serviceProvider;

    public Broadcaster(IServiceProvider serviceProvider) => _serviceProvider = serviceProvider;

    public async Task Broadcast(Decision decision)
    {
        var decisionType = decision.GetType();
        var reactOnType = typeof(IReactAsyncOn<>).MakeGenericType(decisionType);
        var downstreamServices = _serviceProvider.GetServices(reactOnType);
        foreach (var downstreamService in downstreamServices)
        {
            var reactOnMethod = reactOnType.GetMethod(nameof(IReactAsyncOn<Decision>.ReactOn));
            if (reactOnMethod is null) continue;
            var task = (Task)reactOnMethod.Invoke(downstreamService, new object[] { decision })!;
            await task;
        }
    }
}

public class NotificationService : IReactAsyncOn<OrderSubmitted>
{
    private readonly ILogger<NotificationService> _logger;
    public NotificationService(ILogger<NotificationService> logger) => _logger = logger;
    public async Task ReactOn(OrderSubmitted decision) => _logger.LogInformation($"Email sent for order {decision.SubmittedOrder}");
}

public class InventoryService : IReactAsyncOn<OrderSubmitted>
{
    private readonly ILogger<InventoryService> _logger;
    public InventoryService(ILogger<InventoryService> logger) => _logger = logger;
    public async Task ReactOn(OrderSubmitted decision) => _logger.LogInformation($"Inventory reserved for order {decision.SubmittedOrder}");
}
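The registration follows the same pattern as before; a minimal sketch, assuming the same DI container setup:

builder.Services.AddSingleton<IBroadcastDecisions, Broadcaster>();
builder.Services.AddSingleton<IReactAsyncOn<OrderSubmitted>, InventoryService>();
builder.Services.AddSingleton<IReactAsyncOn<OrderSubmitted>, NotificationService>();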

New findings

Now, let's compare new and old implementations:

// old
public interface IReactAsyncOnSubmittedOrders
{
    public Task ReactAsync(Order order);
}

// new
public interface IReactAsyncOn<in T> where T : Decision
{
    public Task ReactOn(T decision);
}

public abstract record Decision;

Can you spot something interesting? What if we just rename the pieces of the new implementation like this:

public interface IEventHandler<in T> where T : Event
{
    public Task Handle(T @event);
}

public abstract record Event;

Does it look familiar to what you see in your day-to-day work with events? To me, it represents the same semantics you can find in virtually every service bus framework implementation.

This brings us to the third important property: events are just interfaces that will be implemented somewhere. The most important parts of an event are its name and its payload, which is why you don't find anything else there.

When you are creating an event, you are just saying that the continuation of the current activity will happen somewhere else, and it then becomes the downstream service's responsibility to define how exactly this continuation should be implemented.

That is why it is actually called Event-Driven: events become the transport that drives the different parts of your application.

Design considerations

Now that we know events are just interfaces, what does this information give us? A very careful reader might have noticed that not all SOLID principles have been used yet. How can we apply the Interface Segregation Principle (ISP) with this knowledge? This principle states that no client should be forced to depend on methods it does not use. As we don't have methods inside our events, we can rephrase it a bit: no handler should depend on decisions it is not interested in.

Consider the following event as a (didactic) example:

public record UserActionCompleted : Event
{
    public string User { get; set; }
    public PerformedAction PerformedAction { get; set; }
    public Dictionary<string, string> ActionDetails { get; set; }
}

public enum PerformedAction
{
    PurchaseItem,
    UpdateProfile
}

In this case, a downstream service that is interested only in users purchasing items needs to implement the following handler:

public class UserPurchasedItem : IEventHandler<UserActionCompleted>
{
    public async Task Handle(UserActionCompleted @event)
    {
        if (@event.PerformedAction != PerformedAction.PurchaseItem)
        {
            return;
        }
    }
}

Doesn't it remind you of the cases where we need to throw an exception because our type is forced to implement a non-segregated interface?

The treatment for this issue is the same as in the case above: separate events by the decisions made by the upstream service:

public record UserPurchaseEvent : Event
{
    public string User { get; init; }
    public decimal Amount { get; init; }
}

public record UserProfileUpdateEvent : Event
{
    public string User { get; init; }
    public string NewDetails { get; init; }
}
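With the events separated like this, a downstream handler no longer has to filter out decisions it doesn't care about; the purchase handler might now look like this (a sketch):

public class UserPurchasedItem : IEventHandler<UserPurchaseEvent>
{
    public async Task Handle(UserPurchaseEvent @event)
    {
        // only purchase decisions ever reach this handler, so no guard clause is needed
    }
}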

This principle should be the main beacon during event design, as it helps you stay focused on what you are handling and keeps you from getting lost in the dozens of decisions your application will make.

It is hard to insist that, at the end of the day, we should have a strict 1–1 relationship between events and the decisions our application makes; how and whether to group them is still an architectural choice. Still, granularity of decisions is more your friend than your enemy here.

Another design point is related to event evolution. During the lifetime of your application it is impossible not to face event adjustments, and it is really important to keep a certain level of predictability here.

What can serve as good guidance here? You are totally right, it is the Liskov Substitution Principle (LSP). This principle states that objects of a superclass should be replaceable with objects of a subclass without affecting the correctness of the program. We can rephrase it like this: once a contract is established, its invariants must be kept until the end of its days.

Let's look at an example:

public record UserPurchaseEvent : Event
{
    public string User { get; init; }
    public decimal Amount { get; init; }
}

In this case, our contract consists of:

  1. Event name — UserPurchaseEvent
  2. Event structure — fields User with type string and Amount with type decimal.

Consider the following handler:

public class UserPurchasedItem : IEventHandler<UserPurchaseEvent>
{
    public async Task Handle(UserPurchaseEvent @event)
    {
        // ...
    }
}

and the following cases:

  1. At some point in time, we change the event name to UserPurchasedProduct. Does it violate LSP? Yes. Even if the event is part of a shared contract and you can easily propagate the new name to every downstream service (when you update the package version or similar), you still need to be aware that most of the time the event name (most likely the fully qualified type name) is used as the name of the message queue. A lack of coordination with downstream services might result in a situation where there are no handlers for the newly defined event type.
  2. At some point in time, we remove the Amount field (or change its type). Does it violate LSP? The answer is still yes. You might ask: but what if this field is not used by any of the existing handlers, doesn't that mean the subclass (the modified event) is still aligned with the superclass? I'd personally say that relying on knowledge about downstream services contradicts DIP, so you need to approach this question as if you don't know (and don't care) whether the field is used somewhere or not. You can think about the following hypothetical test:
Assert.True(UpdatedUserPurchasedItem.StartsWith(OriginalUserPurchasedItem));

where OriginalUserPurchasedItem and UpdatedUserPurchasedItem represent some serialized form of your event.
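As a rough illustration of that test (a sketch only, assuming JSON via System.Text.Json as the serialized form; the closing brace is trimmed so the prefix comparison works):

using System.Text.Json;

// hypothetical serialized forms of the original and the updated event
var original = JsonSerializer.Serialize(new { User = "alex", Amount = 10.5m });
var updated  = JsonSerializer.Serialize(new { User = "alex", Amount = 10.5m, Currency = "USD" });

// an additive change keeps the original payload as a prefix of the updated one
Console.WriteLine(updated.StartsWith(original.TrimEnd('}'))); // True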

3. At some point in time, you add a new field. Does it violate LSP? Most likely not, as long as we don't break the expectations set by the contract. The expectation in this case is the enclosed decision, so if our additive change alters the meaning of the event, I'd consider it a violation of LSP as well. An example of such a violation might look like this:

public record UserPurchaseEvent : Event
{
    public string User { get; init; }
    public decimal Amount { get; init; }
    public byte[] Logo { get; init; }
}
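For contrast, an additive change that keeps the enclosed decision intact (a hypothetical example) would most likely be fine:

public record UserPurchaseEvent : Event
{
    public string User { get; init; }
    public decimal Amount { get; init; }
    public DateTime PurchasedAtUtc { get; init; } // descriptive addition; the meaning of the purchase decision is unchanged
}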

Moving to the broader scope

Previously, we briefly touched on coupling. At some point you will decide to split your application into wider modules (like microservices), and fortunately, events support this natively. All you need to do is decide which technology to use.

Let’s rewrite our architecture a bit to support the required changes:

Moving to microservices

Looks easy: all we've added is an infrastructure layer that will help us distribute decisions to downstream services.

Let's start implementing the discussed changes:

  1. Add new applications
dotnet new web -n Inventory.API
dotnet sln add ./Inventory.API/Inventory.API.csproj

dotnet new web -n Notification.API
dotnet sln add ./Notification.API/Notification.API.csproj

2. Move corresponding services to the new applications

3. To make things a bit easier, we will use one of the popular service bus implementations, MassTransit, with RabbitMQ support

dotnet add Order.API/Order.API.csproj package MassTransit.RabbitMQ
dotnet add Inventory.API/Inventory.API.csproj package MassTransit.RabbitMQ
dotnet add Notification.API/Notification.API.csproj package MassTransit.RabbitMQ

4. Add the MassTransit registration in all services

builder.Services.AddMassTransit(config =>
{
    config.SetKebabCaseEndpointNameFormatter();
    config.AddConsumers(typeof(Program).Assembly);
    config.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host(new Uri("rabbitmq://guest:guest@localhost/"));

        cfg.ConfigureEndpoints(context);
    });
});

5. Adjust the existing services

// Inventory.API
public class InventoryService : IConsumer<OrderSubmitted>
{
private readonly ILogger<InventoryService> _logger;
public InventoryService(ILogger<InventoryService> logger) => _logger = logger;
public async Task Consume(ConsumeContext<OrderSubmitted> context) => _logger.LogInformation($"Inventory reserved for order {context.Message.SubmittedOrder}");
}

// Notification.API
public class NotificationService : IConsumer<OrderSubmitted>
{
private readonly ILogger<NotificationService> _logger;
public NotificationService(ILogger<NotificationService> logger) => _logger = logger;
public async Task Consume(ConsumeContext<OrderSubmitted> context) => _logger.LogInformation($"Email sent for order {context.Message.SubmittedOrder}");
}

// Order.API
public class OrderService
{
private readonly IBus _broadcaster;
private readonly ILogger<OrderService> _logger;

public OrderService(IBus broadcaster, ILogger<OrderService> logger)
{
_broadcaster = broadcaster;
_logger = logger;
}

public async Task<string> Submit(string id)
{
var order = new Order(id, Quantity: 1, Status: "Submitted");
PersistOrder(order);

await _broadcaster.Publish(new OrderSubmitted(order));

return $"Order {id} submitted";
}

private void PersistOrder(Order order) => _logger.LogInformation($"Persisted order {order}");
}
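Since Submit is now asynchronous, the endpoint mapping in Order.API needs a corresponding small tweak (a sketch):

app.MapPost("/order/{id}/submit", async (string id, OrderService service) => await service.Submit(id));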

6. And finally, run RabbitMQ locally

docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management

As usual, let’s test our endpoint with the same command.

As a result, we should see the same output as before, but now coming from a distributed environment.

Events consumed by extracted service

Most likely, you are dealing with the same architecture in your day-to-day life, maybe with some variations in technologies, frameworks, or languages, but the essence of the described architecture remains the same.

Hopefully, this journey was interesting and gave you insight into the nature of event-driven architecture, its patterns, and its important properties.

Conclusion

In light of our journey, it’s clear that Event-Driven Architecture is not just a trend but a fundamental paradigm that will continue to shape the landscape of software architecture and engineering for years to come.

EDA’s ability to handle complex, distributed systems positions it at the forefront of architectural trends, especially as we venture further into an era dominated by cloud computing, big data, and real-time processing.

Of course, Event-Driven Architecture is much more complex than what we’ve explored here. But I hope this article has given you a solid starting point to understand why and where EDA can be effectively applied in your own projects.
