The Magic of Message Orchestration

What is message orchestration and why do we need it?

Dick Dowdell
Nerd For Tech
5 min read · Apr 4, 2023


Message orchestration can be a key to implementing component models that deliver on the promise of microservices without the complexity and performance penalties so frequently paid. Let’s take a look at what message orchestrators are and how we can use them to tame microservices.

What Is a Message Orchestrator?

A message orchestrator is software that facilitates passing messages between software components. It accepts messages in a format understood by the sender and delivers messages in a format understood by the receiver.

Orchestrators can intelligently route messages synchronously to a single recipient as a request with an expected response — or asynchronously to subscribers via a message queue.

Figure 1: Orchestrated Message Flow

A message is usually delivered over a network — but is delivered as a procedure call if the sender and receiver are within the same runtime process.

In the example in Figure 1, messages M2 through M4 originate from Component 1 and are delivered to the target components (2, 3, and 4). The maps of component addresses are automatically shared among orchestrators.
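To make that concrete, here is a minimal sketch of what an orchestrator's contract might look like. The MessageOrchestrator interface and Message record are names invented for illustration, not an existing API:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Hypothetical message envelope: a logical target name, a purpose, and a payload.
record Message(String target, String purpose, Map<String, Object> payload) {}

// Hypothetical orchestrator contract: synchronous request-response and
// asynchronous publish-subscribe behind one interface.
interface MessageOrchestrator {

    // Route a request to a single instance of the target component and await its reply.
    CompletableFuture<Message> request(Message request);

    // Publish an event; the orchestrator delivers it to every registered subscriber.
    void publish(Message event);

    // Register a component so the orchestrator can route messages to it by logical name.
    void register(String componentName, Function<Message, Message> handler);
}
```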

Component Model

Actor model services are stateless and reactive, processing input messages, sending output messages, and publishing events. All messages and events can be passed through an actor’s precondition and postcondition validation, guaranteeing that assertions of required data integrity are always enforced.

Actors are close to an ideal component model for services, because:

  • Actor instances are reactive and execute rules, logic, and data transformations only when reacting to a message/event.
  • Actor instances are fully reentrant and stateless. They react to one message at a time and have no memory of previously processed messages. All data needed to react to a message must be in the message itself or in a persistent datastore. That means any instance of a specific actor type can instantaneously replace any other instance of that same type, which makes seamless failover, scaling, and load balancing straightforward to implement (see the sketch after this list).
  • Actor instances pass messages to other actor instances when they need them to do something.
  • Actor instances publish events when they need to tell interested subscribers about something.
  • An actor instance bounded by one context can pass messages to, or publish events for, actor instances bounded by another context — enabling it to use services developed, deployed, and maintained by other teams.
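As the list above suggests, a stateless actor can be sketched as little more than a function over its input message, with precondition and postcondition checks wrapped around the business logic. The DepositActor, its fields, and the repeated Message record below are illustrative assumptions, not code from any real framework:

```java
import java.util.Map;

// Hypothetical message envelope, repeated here so the sketch stands alone.
record Message(String target, String purpose, Map<String, Object> payload) {}

// A stateless, reentrant actor: it holds no state between messages, so any
// instance of this type can replace any other instance of the same type.
final class DepositActor {

    Message onMessage(Message msg) {
        // Precondition validation: assert required data integrity before doing any work.
        Object amount = msg.payload().get("amount");
        if (!(amount instanceof Number n) || n.doubleValue() <= 0) {
            return new Message(msg.target(), "deposit-rejected",
                    Map.of("reason", "amount must be a positive number"));
        }

        // Business logic: everything needed is in the message or a persistent datastore.
        double newBalance = loadBalance(msg) + n.doubleValue();

        // Postcondition validation: check the result before reporting success.
        if (newBalance < 0) {
            throw new IllegalStateException("postcondition violated: negative balance");
        }
        return new Message(msg.target(), "deposit-accepted", Map.of("balance", newBalance));
    }

    // Stand-in for a read from a persistent datastore.
    private double loadBalance(Message msg) {
        return 100.0;
    }
}
```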

Message Passing

Component patterns like the actor model communicate through message passing. The requester sends a message to a service or publishes an event and relies on the receiving service and its supporting infrastructure to then select and execute the appropriate logic.

A request asks that something be done and an event reports that something has happened. The receiver decides what to do based upon the purpose of the message and its payload.

Both asynchronous event messaging and synchronous request-response messaging can be implemented, giving application developers leverage to optimize communications for specific use cases and performance objectives — all within a common unifying infrastructure.
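Here is a sketch of how the two styles might look side by side from a requester's point of view. The orchestrator interface, component names, and payload fields are all hypothetical:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Minimal stand-ins so this sketch compiles on its own.
record Message(String target, String purpose, Map<String, Object> payload) {}

interface MessageOrchestrator {
    CompletableFuture<Message> request(Message request);
    void publish(Message event);
}

class OrderClient {
    private final MessageOrchestrator orchestrator;

    OrderClient(MessageOrchestrator orchestrator) {
        this.orchestrator = orchestrator;
    }

    // Synchronous request-response: ask that something be done and wait for the answer.
    double priceOrder(String orderId) {
        Message reply = orchestrator
                .request(new Message("pricing-service", "price-order", Map.of("orderId", orderId)))
                .join();
        return ((Number) reply.payload().get("total")).doubleValue();
    }

    // Asynchronous event: report that something has happened; subscribers decide what to do.
    void announceOrderPlaced(String orderId) {
        orchestrator.publish(new Message("order-events", "order-placed", Map.of("orderId", orderId)));
    }
}
```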

Though messages can be passed using HTTPS, that is an implementation choice. Use of a message orchestrator keeps open the option of using whatever message transport protocols best support specific application use cases.

It is the orchestrator’s job to deliver a message to the optimum instance of its target component by the most efficient means available — via a protocol like HTTPS or WebSockets, a high-speed message queue like Kafka or ActiveMQ Artemis, or as a direct call. Orchestrators manage the failover, scaling, and self-organizing capabilities of the messaging model.
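Roughly, that routing decision might look like the sketch below: prefer an in-process direct call when the target component lives in the same runtime, and otherwise hand the message to a network transport. The Transport interface and class names are placeholders, not a real API:

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

record Message(String target, String purpose, Map<String, Object> payload) {}

interface Transport {
    Message deliver(Message msg);   // e.g. HTTPS, WebSockets, or a Kafka/Artemis queue
}

class RoutingOrchestrator {
    private final Map<String, Function<Message, Message>> localComponents;
    private final Transport networkTransport;

    RoutingOrchestrator(Map<String, Function<Message, Message>> localComponents,
                        Transport networkTransport) {
        this.localComponents = localComponents;
        this.networkTransport = networkTransport;
    }

    Message route(Message msg) {
        // If the target is in the same runtime process, deliver the message as a direct call.
        return Optional.ofNullable(localComponents.get(msg.target()))
                .map(handler -> handler.apply(msg))
                // Otherwise hand it to whichever network transport best fits the use case.
                .orElseGet(() -> networkTransport.deliver(msg));
    }
}
```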

The Problem with Microservices

The microservice architectural pattern tends to create more individual components than most other architectural approaches. As the number of things and connections between them grows, complexity increases dramatically.

Figure 2: Complexity = n(n-1)/2

With 10 components there are up to 45 possible connections; with 50 there are up to 1,225. Failing to manage that complexity is one of the major contributors to failed microservices implementations, and it drives up the cost of application design, development, testing, deployment, operation, and maintenance.

Complexity is a fundamental limiting factor in the successful implementation of large service-oriented systems. Top-down hierarchical configuration, as used in most systems, is not well-suited to cope with that complexity. A better solution is needed.

Plug and Play

For that solution we can look to plug and play, the way modern operating systems cope with attached devices. When connected, a plug and play device is sensed by the operating system, which then queries it to determine its type, capabilities, and the interfaces necessary to communicate with it.

A plug and play component is a composable microservice that — when added to a Jakarta EE server’s classpath, or when deployed to a cloud container — is registered with the nearest message orchestrator.

Upon registering with an orchestrator, the component becomes a part of the application and, subject to applicable security constraints, is able to send and receive messages, publish and subscribe to events, and seamlessly join in the functionality of the application system.

An individual requester does not need to know the network location of any other service with which it communicates; the message orchestrator with which it has registered is responsible for that.
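A toy sketch of that registration and name-based addressing follows, with the registry, component names, and lookup behavior assumed purely for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

record Message(String target, String purpose, Map<String, Object> payload) {}

// Hypothetical orchestrator-side registry: components register by logical name,
// and requesters address them by that name, never by network location.
class PlugAndPlayOrchestrator {
    private final Map<String, Function<Message, Message>> registry = new ConcurrentHashMap<>();

    // Called when a component is deployed: it becomes part of the application.
    void register(String componentName, Function<Message, Message> handler) {
        registry.put(componentName, handler);
    }

    // A requester only needs the logical name; the orchestrator resolves the rest.
    Message request(Message msg) {
        Function<Message, Message> handler = registry.get(msg.target());
        if (handler == null) {
            throw new IllegalStateException("no registered component named " + msg.target());
        }
        return handler.apply(msg);
    }
}

class Demo {
    public static void main(String[] args) {
        PlugAndPlayOrchestrator orchestrator = new PlugAndPlayOrchestrator();

        // Deployment: the new component registers with the nearest orchestrator.
        orchestrator.register("greeting-service",
                msg -> new Message(msg.target(), "greeting",
                        Map.of("text", "Hello, " + msg.payload().get("name"))));

        // A requester addresses the component by logical name only.
        Message reply = orchestrator.request(
                new Message("greeting-service", "greet", Map.of("name", "world")));
        System.out.println(reply.payload().get("text"));
    }
}
```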

This decentralized structure provides resiliency and robustness. When any element cannot be reached, the request can be automatically redirected to a like element.

A successful service architecture implements a decentralized structure where complex capabilities can emerge from the interaction of relatively simple parts — while at the same time minimizing the complexities of configuration and deployment.

Wrapping Up

This kind of message orchestration, and the dynamic service discovery it enables, was originated by Sun Microsystems in 1998 with Jini — which provided the infrastructure for a distributed service-object-oriented architecture model as a foundation for Internet of Things implementations.

Jini was, unfortunately, ahead of its time and tightly coupled to Java stubs and Java Remote Method Invocation, so it didn't really catch on. Its final release was as the open source Apache River project in 2016.

Jini pioneered many excellent solutions for distributing and managing services across networks — so when the team I work on began building a truly distributed microservices infrastructure, we started with Jini as a model.

We replaced the less appropriate concepts, added a few of our own, and implemented our model to be deployable using cloud containers, Jakarta EE containers, or a combination of both. We've been very pleased with the results.

If you found this article useful, a clap would let us know that we’re on the right track.

Thanks!
