About Staged Event-Driven Architecture
A couple of years before microservices, there was the SEDA architecture
In January 2010, during a trip to London, I was lucky to have a drink with Zack, the South African CTO of a wealth management firm. It literally changed my (technical) life. He introduced me to RabbitMQ, pub-sub architecture and the AMQP protocol at a time when JMS (Java Message Service), IBM MQ and TIBCO RDV were pretty much the default options for messaging in a Java environment.
AMQP decoupled the message description, the transport layer and the client endpoint, which in this case could be Java or non-Java (.NET, Ruby, …). There was no more need for a bridge to translate message descriptions when my Java producer had to communicate with a .NET consumer, and AMQP gave us a standardized way to describe messages, consuming endpoints and routing rules.
The day after my meeting with Zack, I contacted the folks at RabbitMQ. A series of meetings ensued during which I was introduced to the concept of Staged Event-Driven Architecture (SEDA), in which an application is designed as a series of loosely coupled modules (stages) embedding the business logic, connected in and out by queues that transport messages triggering specific events. This was at a time when we were actively thinking about redesigning our intraday post-trade platform and wanted to use “development patterns of 2010, not 2000.” It was the dawn of a new decade.

With SEDA in mind, we designed the trade life cycle as a series of elementary stages that we called adapters (borrowing the terminology from Spring Integration). Each adapter was tasked with a specific function of the trade life cycle, such as capture, allocation, matching, booking or reconciliation, and was connected to other adapters by RabbitMQ queues transporting trades (events) in and out. As trades progressed through the stages, they were further and further enriched until they ended up in the booking platform.
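The stage-and-queue pattern described above can be sketched in plain Java. This is only an in-memory illustration, not the actual adapter code: `java.util.concurrent.BlockingQueue` stands in for the RabbitMQ queues, and the stage names and string "enrichment" are hypothetical.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;

/** Minimal sketch of a SEDA pipeline: each stage consumes events from an
 *  inbound queue, applies its own logic, and publishes downstream. */
public class SedaStageSketch {

    // A stage is just a loop: take an event in, transform it, put it out.
    static Thread stage(BlockingQueue<String> in, BlockingQueue<String> out,
                        Function<String, String> logic) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String trade = in.take();        // block until an event arrives
                    out.put(logic.apply(trade));     // enrich and pass downstream
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // stop on shutdown
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> captured  = new LinkedBlockingQueue<>();
        BlockingQueue<String> allocated = new LinkedBlockingQueue<>();
        BlockingQueue<String> booked    = new LinkedBlockingQueue<>();

        // Two stages chained by queues: capture -> allocation -> booking.
        stage(captured, allocated, t -> t + "+allocated");
        stage(allocated, booked,   t -> t + "+booked");

        captured.put("trade-42");
        System.out.println(booked.take());  // trade-42+allocated+booked
    }
}
```

In the real system each stage was a separate Spring Integration adapter consuming from and publishing to AMQP queues, which is what let stages be deployed, scaled and restarted independently.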
Adapters themselves were developed with the Spring framework (Spring Integration in particular, to support the event-driven architecture) and deployed inside Tomcat servers as WAR files. The messages published and consumed by adapters were described in FIXML, the standard format for describing financial instruments and events. This loosely coupled, real-time, event-driven architecture was a huge departure from the legacy system, which was based on a set of daemon processes moving trades between staging SQL databases in bulk, at a pre-defined interval. It worked along the lines of: every 10 minutes, select 1,000 trades from database A, enrich them (i.e., apply a SQL procedure to them) and insert the updated records into database B. It was a horrible architecture in the event of high trading activity (think “flash crash”) because the message throughput was constant by design.
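For contrast, the legacy polling pattern can be sketched as a fixed-size batch mover. Again this is a stand-in, not the original code: the in-memory collections represent the staging databases, and the string "enrichment" represents the SQL procedure. The point is the hard cap: one cycle processes at most 1,000 trades no matter how many are waiting.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Sketch of the legacy batch mover: every interval, move at most BATCH
 *  trades from staging database A to staging database B. */
public class BatchMoverSketch {
    static final int BATCH = 1000;

    // One polling cycle: select up to BATCH trades, enrich, insert into B.
    static void tick(Deque<String> dbA, List<String> dbB) {
        for (int i = 0; i < BATCH && !dbA.isEmpty(); i++) {
            dbB.add(dbA.poll() + "+enriched");  // stand-in for the SQL procedure
        }
    }

    public static void main(String[] args) {
        Deque<String> dbA = new ArrayDeque<>();
        List<String> dbB = new ArrayList<>();

        // A flash-crash burst: 5,000 trades arrive in one interval.
        for (int i = 0; i < 5000; i++) dbA.add("trade-" + i);

        tick(dbA, dbB);  // one 10-minute cycle handles only 1,000 of them
        System.out.println(dbA.size() + " trades still waiting");  // 4000 trades still waiting
    }
}
```

Under a burst, the backlog in database A simply grows, because the cycle's throughput is the same whether 100 or 100,000 trades are waiting; the SEDA design, by contrast, processes events as fast as each stage can consume them.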
SEDA was certainly the best architecture to pick for a trade-processing system, which decomposes naturally into stages. It was a very innovative architecture at the time and is still very relevant today whenever an object has to be moved from point A to point H with different transformations happening along the way. But development frameworks continued to evolve (they always do, and fast). Microservice architecture appeared around 2012, and a couple of years later Spring released Spring Boot and Spring Cloud, which provided a complete environment for developing microservices in Java. All the critical components were ready and mature when we began developing the server-side services powering the Waveum back-end in the winter of 2016.