Designing Microservices

A Practical Approach to Designing and Building Microservices

Dick Dowdell
Nerd For Tech
19 min read · Sep 6, 2021


How to do the hardest part of microservices implementation: converting legacy application functionality or new application requirements into microservices that truly exploit the advantages of the microservice architectural pattern.

First — Why Do Microservices Matter?

Almost everything in modern life is, in some way, impacted by computing and computer software. Debilitating software development backlogs are already a pervasive problem, and computing is in the midst of a major paradigm shift to the hybrid cloud. The inevitable result is a serious lack of speed and agility when responding to competitive challenges and opportunities. It’s a lot like running a race in heavy boots while your competition is wearing track shoes.

We need new patterns of software design and architecture to better address both development backlogs and the benefits of the hybrid cloud. The microservice architectural pattern is one of those essential new patterns.

What Kinds of Problems Do Microservices Address?

The microservice architectural pattern is directed at some of today’s more pressing software development, deployment, and operational issues by:

  • Shortening software development cycles by optimizing agile software development, delivery, and maintenance practices.
  • Enabling rapid application feature iteration by simplifying software testing, continuous integration, and continuous delivery.
  • Exploiting the automated deployment, scaling, and fail-over capabilities available with cloud containers and container orchestration.

The idea is that, by increasing the isolation between software components, microservices can deliver small, discrete parts of a system both rapidly and independently and, by utilizing containers and container orchestration, can deliver a high degree of horizontal scalability and fault tolerance across cloud clusters.

So What Is a Microservice?

The microservice architectural pattern is an effective way to break down application requirements into manageable, independently deployable components that can be connected together to form entire applications. The pattern is called microservice because its components tend to be smaller than the traditional Service-Oriented Architecture services that have been popular for more than a decade — but it is so much more than just smaller.

Done correctly, the microservice pattern meets or exceeds all the points and the purposes of the famous memo that helped set the stage for Amazon’s unparalleled business agility. This memo is well worth taking a moment to read. It is fundamental to the reasons why microservices make good business sense.

An individual microservice:

  • Implements a task (or set of closely related tasks) within a single domain bounded context. This is a fundamental characteristic of a microservice and promotes the high level of granularity and separation of concerns that preserves microservice autonomy and independent deployability.
  • Is loosely-coupled, communicating via message passing or events, and needs little or no knowledge of the definitions of other microservices — enforcing separation of concerns.
  • Is autonomous and can be developed and modified with less coordination among the involved development teams — promoting sound agile development practices.
  • Is independently deployable and can be individually tested, rolled out, and rolled back without impacting other microservices — enabling cloud-based automated deployment, scaling, and failover.

Keep these four basic constraints in mind. If an application component does not meet them, it may still be a service, but it is not a microservice — and it is unlikely to deliver all the benefits promised by the microservice architectural pattern.

Microservices as Services

It is important to understand that though microservices are a type of Service-Oriented Architecture, not all SOA implementations are microservices. Using the popular SOA frameworks can make it difficult — if not impossible — to realize the full promise of microservices.

DISCUSSION: Spring Boot is an open-source micro framework maintained by Pivotal Software. It gives Java developers a platform for getting started with an auto-configurable, production-grade Spring application, so they can become productive quickly without losing time preparing and configuring their Spring application. Spring Boot makes creating SOA services cleaner and easier by hiding the complexities of building and configuring services that run in Java servlet containers, and it creates executable JAR files that can be deployed in cloud containers. But Spring Boot’s individual services are not independently deployable, and its executable JARs start up in seconds rather than the milliseconds required of a microservice. Spring Boot is an excellent choice for building and deploying traditional SOA applications, but every Spring Boot-generated executable JAR contains an embedded Web server and a subset of the Spring framework. It is not a microservice.

Cloud-native applications are specifically implemented to provide a consistent development, deployment, automated management, and communications model across private, public, and hybrid clouds. They are designed to exploit the automated deployment, scaling, and failover capabilities available through containers and container orchestration. Traditional SOA frameworks cannot readily support this.

What Makes Designing for Microservices Different?

A microservice is a loosely-coupled, autonomous, and independently deployable component. Those attributes are not unique to microservices. What makes microservices a bit more of a design challenge is that a microservice operates totally within a domain bounded context.

The only data a microservice can see or modify is either in its bounded context or in the message to which it is reacting. A microservice is reactive, stateless, and reentrant. It reacts to messages and maintains no state from one message to the next. No data survives between messages unless it has been written to persistent storage via the microservice’s bounded context.
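To make these constraints concrete, here is a minimal Python sketch (all names hypothetical, not from the article) of a stateless, reentrant handler: everything it needs arrives in the message, and the only path to persistent data is its bounded context.

```python
class OrderContext:
    """Hypothetical bounded-context gateway: the ONLY path to persistent data."""
    def __init__(self):
        self._store = {}  # stands in for a persistent datastore

    def save(self, order_id, order):
        self._store[order_id] = order

    def load(self, order_id):
        return self._store.get(order_id)


def handle_add_order(message, context):
    """Stateless, reentrant handler: all input comes from the message;
    anything that must survive goes through the bounded context."""
    order = {"id": message["order_id"], "items": message["items"]}
    context.save(order["id"], order)
    return {"status": "ok", "order_id": order["id"]}


ctx = OrderContext()
reply = handle_add_order({"order_id": "A-1", "items": ["widget"]}, ctx)
```

Note that `handle_add_order` holds no instance state between calls: run it twice with different messages and neither call can see the other except through the context's persistent store.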

Until one gets used to them, these constraints can seem to be pretty limiting. So what benefits do they provide? What do we get back that makes the constraints worth learning to use? Let’s read on.

What Is a Domain Bounded Context?

The first challenge in microservice design is drawing the boundaries around each individual microservice. That’s where understanding the Domain-Driven Design concept of a bounded context is very useful.

In plain English, the word context refers to the rules and conditions within which something occurs or exists. We use a bounded context to align an individual microservice’s functionality with business organizational boundaries, rules, and data ownership. That way the microservice can be developed, deployed, and modified by its team with minimal interference with, or from, other microservice development teams. We use it to achieve the high level of granularity and separation of concerns that preserves microservice autonomy and independent deployability.

So, just how do we work this magic? As you will see, we work it by exploiting the ability of the actor model to honor the real organizational boundaries and rules of real business organizations.

Why Use the Actor Model?

Long before there was cloud computing, the actor model was designed to address, and has since been proven to meet, the fundamental requirements of multi-cloud applications: concurrency, failover, and scaling. Serious cloud application developers are now putting it to work as a microservice component model to meet today’s needs.

Actor model microservices are stateless and reactive, handling input messages through input channels and sending messages and publishing events through output channels. All message and event IO passes through an actor’s attached mailbox.

Actors are close to an ideal component model for distributed microservices because:

  • Actor instances are reactive and execute rules, logic, and data transformations only when reacting to a message.
  • Actor instances are absolutely reentrant and stateless. They react to one message at a time and have no memory of previous messages processed. All data needed to react to a message must be in the message itself or in a persistent datastore.
  • Actor instances pass messages to other actor instances when they need them to do something.
  • Actor instances publish events when they need to tell interested parties about something.
  • An actor instance bounded by one context can pass messages to, or publish events for, actor instances bounded by another context — enabling it to use microservices developed, deployed, and maintained by other teams.
Figure 1: An Actor Model Microservice
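The mailbox-and-reaction shape described above can be sketched in a few lines of Python. This is a toy illustration, not a production actor runtime; the names are invented for the example.

```python
from queue import Queue


class Actor:
    """Minimal actor: a mailbox plus a reaction function. The actor keeps
    no state between messages; it only reads each message and emits output."""
    def __init__(self, react):
        self.mailbox = Queue()
        self.react = react

    def tell(self, message):
        # All message IO passes through the attached mailbox.
        self.mailbox.put(message)

    def drain(self):
        # React to buffered messages one at a time, in arrival order.
        out = []
        while not self.mailbox.empty():
            out.append(self.react(self.mailbox.get()))
        return out


double = Actor(lambda msg: {"result": msg["value"] * 2})
double.tell({"value": 21})
results = double.drain()
```

Because the reaction function receives only the message, any number of identical `Actor` instances could drain the same logical workload, which is what makes scaling and failover straightforward.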

The actor model creates reactive microservices whose attributes are discussed in-depth in The Reactive Manifesto. The 17-minute YouTube video, titled What Are Reactive Systems?, is an excellent explanation of the advantages of reactive systems.

How Big Should a Microservice Be?

The simple answer is as small as possible. But there are two parts to that answer:

  • The simple part is that to facilitate dynamic scaling and failover we need to keep containers small enough to minimize the computing resources and time required to start them. A microservice container start time should be measured in milliseconds not seconds. If it isn’t, it is probably too big.
  • The more complicated part is that to be autonomous and independently deployable a microservice must focus on implementing a single task (or set of closely related tasks) that is tightly bounded by a clearly defined context (data meaning, ownership and responsibility) within its problem domain. The more things an individual service tries to do or the broader its context, the less likely it is to be an effective microservice. If a microservice starts to get large, you should probably reevaluate its bounded context and the scope of its assigned tasks. Smaller is usually better.

DISCUSSION: Keep in mind that large, complex tasks can often be broken down into separate subtasks, each implemented by its own microservice. Depending upon their purpose, those new microservices can be invoked by message passing or by publishing events. It is useful to view a microservice more as a method, function, or subroutine invoked through a message or event, rather than as a traditional program.

Microservice Deployment Units

Many of the benefits of the microservice architectural pattern derive from the fine granularity with which its units can be implemented and deployed. Deploying and managing true microservices (as opposed to Spring Boot SOA services) requires the power of containerization and container orchestrators like Kubernetes.

In simple terms, a container is a virtualized executable image. That image can be pushed to a centralized container registry that Kubernetes uses to deploy container instances to a cloud cluster’s pods. The container registry concept dramatically simplifies composing pods from multiple small containers.

A pod can be viewed as a kind of wrapper for container instances. Each pod is given its own IP address with which it can interact with other pods within the cluster. Usually, a pod contains only one container. But a pod can contain multiple containers if those containers need to share resources. If there is more than one container in a pod, those containers communicate with one another via the localhost IP address.

When implementing the microservice architectural pattern, a pod contains, at a minimum, one application container and one message orchestrator proxy sidecar container (to connect it to the rest of the application’s microservices). Frequently, a primary microservice container will be packaged in a pod with any subordinate microservice containers that it directly messages.

Figure 2: Typical Microservice Pod Composition
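A minimal Kubernetes manifest for that composition might look like the following. The image names, registry, and the `orchestrator-proxy` sidecar are illustrative assumptions, not part of any real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sales-order-task
spec:
  containers:
    # Primary application microservice container
    - name: sales-order-actor
      image: registry.example.com/sales-order-actor:1.0.0
    # Message orchestrator proxy sidecar: connects the actor to the
    # rest of the application's microservices; the two containers
    # share the pod's network namespace and talk over localhost
    - name: orchestrator-proxy
      image: registry.example.com/orchestrator-proxy:1.0.0
```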

DISCUSSION: The term container should not be confused with a Web container (also known as a servlet container) which is the component of a Web server that interacts with Jakarta Servlets. The Web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks.

A common concern voiced about microservices is the runtime overhead of multiple containers and the latency of the connections between them. In practice, this is rarely a problem with properly designed microservices.

Numerous benchmarks show that containers perform better than VMs in terms of CPU performance, memory throughput, disk I/O, load testing, and operational speed. Containers are optimized for rapid startup of small executables, and the messaging latency between microservices within the same cluster is negligible.

Types of Microservice Actors

Individual microservices can be described by the kinds of services they perform, very much like the tiers of traditional layered architecture systems — without the development, deployment, and runtime limitations of older architectures. Most of the microservice actors in an application system will fall into one of these categories and communicate with each other by message passing:

  • Application task actors implement discrete application tasks from the user’s perspective. They tend to be the most volatile actors and are the microservices most involved with application features. Task actors can access, create, and modify persistent data only through context handler actors.
  • Context handler actors are used by task actors for creating, reading, writing and deleting data through logical views representing specific domain bounded contexts. They can work with single resource handlers or multiple resource handlers. Context handlers present a logical executable model of a bounded context to task actor instances and interact with resource handlers to map, store, and retrieve data in the physical data model. Context handlers and their associated resource handlers do the heavy lifting of distributed data management (failover, scaling, replication, consistency) for the rest of an application’s actors.
  • Resource handler actors are used by context handlers as adapters to the persistent physical data model. They are used by context handlers to map resources to and from persistent storage, very much as an Object Relational Mapper such as Hibernate, maps objects to and from relational databases. Resource handlers are responsible for resource cache management. Resources are things that reside in non-volatile storage like files, key-value stores, and databases.
  • Message orchestrator actors are the wiring that connects individual actors, organizing messaging among them and acting as circuit breakers to halt cascading error conditions. Orchestrators manage the failover, scaling, and self-organizing capabilities of the Cloud Actor Model. When an orchestrator is running, it broadcasts its presence to all other reachable orchestrators; orchestrators are federated across cloud clusters and share state information with each other. A small orchestrator proxy lives as a sidecar in every actor pod to facilitate actor registration and to pass each message through the optimal choice among the available orchestrators. Orchestrators take in messages addressed to a specific actor type and route them to the physical address of the optimum instance of that actor type. A mailbox is paired with each individual actor instance to buffer its incoming messages and to send messages and communicate with orchestrator proxies on its behalf. There is usually only one message orchestrator microservice actor class, but there is at least one (and usually more than one) running instance of it in every cloud cluster.
  • Event publisher actors publish event messages through distributed streaming queue systems like Kafka or Apache ActiveMQ Artemis. They send event messages by topic to event queues. Publishers have no knowledge of their subscribers and leave the technical details of the streaming queues themselves to the queuing software.
  • Event handler actors subscribe to a message queue of a specified topic. The handler reads each message, in order, from the queue and forwards it to an appropriate actor instance.
  • WebSocket handler actors are WebSocket servers that accept requests and route them as messages to an appropriate task actor instance. If and when a WebSocket handler receives a response message, it matches it to the original request and sends it to the requester. Timeouts are handled by the requester. WebSocket handlers are internal edge actors that enable access from outside a cloud cluster. A WebSocket handler can be used to implement secure full-duplex messaging into and out of external or browser-based applications, but can downgrade a conversation to synchronous HTTP when necessary.
  • Cluster bridge actors move messages between cloud clusters, implement and enforce inter-cluster security, and optimize inter-cluster messaging. This is particularly important for Kafka performance.
  • Utility handler actors such as distributed loggers, error handlers, and transaction monitors perform utility functions in most microservice deployments.
  • Strangler façade actors implement a rule-based façade for the Strangler Fig Pattern and can redirect legacy API calls to new cloud-capable applications as mature applications are gradually transformed into multi-cloud applications.

All the advantages of the Layered Architectural Pattern are available through the judicious design of microservices, without the disadvantages of tight coupling or the need for monolithic deployments of large executables, all while enabling cloud-friendly automated deployment, scaling, and failover.
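The routing behavior of a message orchestrator, registering actors by type and directing each message to the optimum instance of that type, can be sketched as follows. This is a deliberately simplified toy (in-process, "least busy wins"); all names are hypothetical.

```python
class Orchestrator:
    """Toy message orchestrator: actors register by type; a message addressed
    to a type is routed to the least-busy registered instance of that type."""
    def __init__(self):
        self.registry = {}  # actor type -> list of registered instances

    def register(self, actor_type, instance):
        self.registry.setdefault(actor_type, []).append(instance)

    def route(self, actor_type, message):
        instances = self.registry[actor_type]
        target = min(instances, key=lambda a: a.load)  # pick the optimum instance
        target.load += 1
        return target.handle(message)


class EchoActor:
    def __init__(self, name):
        self.name, self.load = name, 0

    def handle(self, message):
        return (self.name, message)


orch = Orchestrator()
orch.register("echo", EchoActor("echo-1"))
orch.register("echo", EchoActor("echo-2"))
first = orch.route("echo", "hello")   # goes to echo-1 (both idle, first wins)
second = orch.route("echo", "world")  # goes to echo-2 (now the least busy)
```

The point of the sketch is that the sender addresses an actor *type*, never a physical instance; the orchestrator owns the mapping to physical addresses, which is what makes scaling and failover transparent to senders.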

The Key Design Concept

The most critical element in effective microservice design and implementation is the domain bounded context. A bounded context must give the organizational unit, responsible for the context, the means to guarantee the integrity and security of that context’s data — while, at the same time, making it available for access and use by the microservices of other organizational units.

One way to accomplish that is to make a bounded context a first class actor in its own right. This means a microservice actor that implements a bounded context is created, deployed, and maintained by a development team responsible to the organizational unit that owns the data. All other microservices using that context can only access it through the context’s published microservice. That means that any microservice — created, deployed, and maintained by any development team — can only read and write the context’s persistent data using the rules defined and enforced by the data’s owners.

Figure 3: Sales Order Context Entities

The concept of the context handler helps to address two of the more complex challenges facing the designers of cloud-capable microservices:

  • By describing and implementing effective domain bounded contexts for microservice tasks, a context handler brings together the data — with the specifications, relationships, rules, and behaviors that make up the context — in order to create an individually deployable and reusable component.
  • By managing multiple instances of datastores distributed across multiple clusters for scaling and failover, context handlers assume the responsibility for publishing the context-changed events used by the context handlers in connected cloud clusters to synchronize their persistent datastore instances.

Passed Messages and Events

Microservices are loosely-coupled, communicating via message passing or events. That, along with being stateless and reentrant, enables them to function well as distributed components. From the perspective of an individual microservice, it makes no difference whether a message it is reacting to was a passed message or the payload of an event. The Event-Carried State Transfer (ECST) pattern is useful for implementing that concept.

Using REST and ECST for messaging simplifies API design and implementation, while reducing the number of individual message formats that must be documented and managed.

Microservice messages should be serializable so that they can easily move over networks. To avoid the complexities of distributed schema management, they should also be self-describing. To be secure, they should be encrypted and digitally signed. A combination of JSON, GZIP compression, and TLS can help to meet those requirements.
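Those requirements can be sketched with Python's standard library: JSON for a self-describing body, an HMAC signature in place of a full digital signature, and GZIP compression (with TLS assumed to protect the bytes in transit). The key handling here is a placeholder; real key distribution and signing would be considerably more involved.

```python
import gzip
import hashlib
import hmac
import json

SECRET = b"shared-signing-key"  # assumption: key management handled elsewhere


def pack(message):
    """Serialize a self-describing message: JSON body, HMAC signature,
    then GZIP compression of the whole envelope."""
    body = json.dumps(message, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    envelope = json.dumps({"body": body.decode(), "sig": sig}).encode()
    return gzip.compress(envelope)


def unpack(blob):
    """Decompress, verify the signature, and return the original message."""
    envelope = json.loads(gzip.decompress(blob))
    body = envelope["body"].encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("message signature check failed")
    return json.loads(body)


msg = {"type": "AddSalesOrder", "order_id": "A-1"}
roundtrip = unpack(pack(msg))
```

Because the body is plain JSON, a receiver needs no shared schema registry to interpret it, which is the "self-describing" property the paragraph above calls for.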

A message is passed when we want something done. An event is published to a topic when we need to report that something has happened. A passed message is handled by a single targeted actor. An event is handled by all actors who subscribe to its topic.

Figure 4: Microservices with Context Handlers

If we look at Figure 4: Microservices with Context Handlers, above, we can see that in Cloud Cluster 1 the Sales Order Context Actor reacts to an Add Sales Order message by updating two databases and publishing a Sales Order Added event with the Add Sales Order message as its payload. The Event Handler Actor in Cloud Cluster 2 subscribes to the Sales Order Added event, extracts its Add Sales Order message, and passes it to its local Sales Order Context Actor, which in turn updates its two local databases, ultimately mirroring the changes to Cloud Cluster 1’s databases on Cloud Cluster 2.

The Sales Order Context Actor can be used by any microservice, written by any development team, that needs to implement a sales order-related task — while the sales order context data remains protected by the rules implemented by the Sales Order Context Actor.

Synchronous Versus Asynchronous Messaging

Microservices communicate through message passing, rather than by function calls, method calls, or remote procedure calls. For many reasons — some relating to the impact that messaging might have on system performance — messaging is asynchronous. An actor instance does not wait for responses so it does not hold onto executable threads. That’s great for performance, but in practical terms much of what we do with computers involves requests and responses. How can we handle that?

Figure 5: Advantages of Pseudo-Synchronous Messaging

One way is by using pseudo-synchronous messaging (Figure 5: Advantages of Pseudo-Synchronous Messaging), where the responding actor instance sends a response-type message back to the requesting actor type, and an instance of that type uses a secondary channel to process the message. Because actor instances are stateless, any instance of the requesting actor type can do the job and no one has to wait for a response. It is called pseudo-synchronous messaging because it looks exactly like a normal request-response from the user’s perspective, but its mechanics are purely asynchronous.

The Message Channel Construct

An input channel is an input message stream with attached message processing logic that an actor uses to react to an individual message type and category. The type is an application-specific message format. There are three categories:

  • Task category messages invoke the actor’s primary task logic. They are analogous to traditional API calls. A task message may also be the payload of an event.
  • Response category messages convey a successful response (with data) from another actor.
  • Error category messages convey an error response (with data) from another actor. The error may be an application error or a message delivery failure.

An output channel is an output message stream that an actor uses to:

  • Send a task message to a specific actor type.
  • Publish an event message to a specific event topic.
  • Send a response message to a specific actor type.
  • Send an error message to a specific actor type.

Channels enable reactive logic and facilitate asynchronous, pseudo-synchronous, and event messaging — while insulating actors from specific I/O and queuing technologies.
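One way to picture the channel construct is as a dispatch table keyed by message type and category, with each channel pairing a message stream with its reaction logic. The sketch below is a hypothetical illustration, not a real channel implementation.

```python
class ChannelActor:
    """Input channels keyed by (message type, category); each registered
    function is the processing logic attached to that channel."""
    def __init__(self):
        self.channels = {}

    def channel(self, msg_type, category):
        def register(fn):
            self.channels[(msg_type, category)] = fn
            return fn
        return register

    def react(self, message):
        # Route the message to the channel matching its type and category.
        handler = self.channels[(message["type"], message["category"])]
        return handler(message)


actor = ChannelActor()


@actor.channel("AddSalesOrder", "task")
def on_task(msg):
    return f"processing order {msg['order_id']}"


@actor.channel("AddSalesOrder", "error")
def on_error(msg):
    return f"delivery failed for order {msg['order_id']}"


ok = actor.react({"type": "AddSalesOrder", "category": "task", "order_id": "A-1"})
err = actor.react({"type": "AddSalesOrder", "category": "error", "order_id": "A-2"})
```

Because the actor's logic only ever sees messages handed to it by its channels, swapping the underlying transport (in-memory queue, Kafka topic, WebSocket) changes nothing in the reaction functions, which is the insulation the paragraph above describes.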

Observability

There is no such thing as a free lunch. Software architecture is all about tradeoffs. In return for the benefits of the microservice architectural pattern, we must effectively configure and manage more individual components and more connections between components.

Observability is an important attribute of microservices implementations because it provides more tools for managing complexity. There are two levels of observability:

  • The first is the ability to monitor underlying system resources such as CPUs, memory, databases, and networks. These measurements require system-level tools, as opposed to application-level observability.
  • The second is the ability to observe the actual interaction and performance of application components. This observability must be designed and built into the applications themselves — by logging messages, events, and errors so that their performance can be compared to desired performance specifications.
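As a sketch of the second, application-level kind of observability, the wrapper below records a structured log entry for every message an actor handles: who handled it, what it was, and how long it took. The names and record format are assumptions for illustration.

```python
import json
import time


def observe(actor_name, react):
    """Wrap an actor's reaction function so that every message handled
    emits a structured, machine-readable log record."""
    records = []

    def wrapped(message):
        start = time.perf_counter()
        result = react(message)
        records.append(json.dumps({
            "actor": actor_name,
            "message_type": message.get("type"),
            "elapsed_ms": round((time.perf_counter() - start) * 1000, 3),
        }))
        return result

    wrapped.records = records  # in practice these would go to a distributed logger
    return wrapped


handler = observe("sales-order-context", lambda m: {"status": "ok"})
reply = handler({"type": "AddSalesOrder"})
```

Logging at the message boundary like this is cheap to add and gives exactly the message/event/error trail needed to compare actual behavior against performance specifications.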

We use the ability to observe the behavior of application components in order to:

  • Implement dynamic self-organization, failover, and scaling.
  • Identify weaknesses and bottlenecks resulting from application design, architectural, or system resource issues.
  • Replay application events to restore application state after system failures.

We use the ability to monitor system resources in order to:

  • Understand and resolve system resource availability and tuning issues.
  • Debug application component issues.

Self-Organization

The microservice architectural pattern tends to generate more individual components than most other architectural approaches. Because of this, complexity is the primary limiting factor in successful microservices implementations. As the number of things (microservices, resources) and connections between them grows, complexity increases nonlinearly [c = n(n-1)/2]. Top-down hierarchical controls, as implemented in most systems, are ill-suited to cope with this complexity. A better solution is needed.
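A few values of that formula show how quickly the count of potential pairwise connections outruns the count of components:

```python
def potential_connections(n):
    """c = n(n-1)/2: every pair of components is a potential connection."""
    return n * (n - 1) // 2


growth = {n: potential_connections(n) for n in (5, 10, 50, 100)}
# 5 components yield 10 potential connections, but 100 components yield 4950
```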

We can create increasingly large and complex applications, integrating and operating on data spread across the cloud — if only we can manage them. Most of the working systems of that complexity occur in the natural world. We need to look at self-organizing systems, the way nature copes with complexity. Self-organizing systems emerge from bottom-up interactions, unlike top-down hierarchical systems, which are not self-organizing.

Message orchestrators can be used to implement dynamic coupling among federated microservices (which, upon starting up, register with their nearest message orchestrator). In this way, control is distributed over whole systems and all parts contribute to the resulting functionality, as opposed to centralized structures that often depend upon a single coordinating entity. An individual microservice does not need to know the network addresses of any other microservices with which it communicates, only the IP address of the message orchestrator with which it has registered.

This decentralized structure, inherent to self-organizing systems, gives them resiliency and robustness. When any element fails it can easily be replaced by a like element. A successful cloud-native architecture mimics the decentralized structure of organic, living systems where complex capabilities can emerge from the interaction of relatively simple parts — while at the same time minimizing the complexities of configuration and deployment.

Containers and Container Orchestration

Without containers and container orchestration, the overhead of deploying, managing, and running the many microservices needed to implement even a small application would be prohibitive. Many of the criticisms of the microservice architectural pattern would be legitimate if it weren’t for containers and container orchestration. To design effective microservices we have to understand this enabling technology.

These are the main technologies upon which practical microservices depend:

  • Containerization is a virtualization method to run distributed applications in containers using microservices. Containerizing an application requires a base image that can be used to create an instance of a container. Once an application’s image exists, one can push it to a centralized container registry that Kubernetes can use to deploy container instances in a cluster’s pods.
  • Pods are the smallest unit of the Kubernetes architecture, and can be viewed as a kind of wrapper for a container. Each Pod is given its own IP address with which it can interact with other Pods within the cluster. Usually, a Pod contains only one container, but a Pod can contain multiple containers if those containers need to share resources. If there is more than one container in a Pod, these containers can communicate with one another via localhost.
  • Services group identical Pods together to provide a consistent means of accessing them. For instance, one might have three Pods that are all serving a website, and all of those Pods need to be accessible on port 80. A Service can ensure that all of the Pods are accessible at that port, and can load balance traffic between those Pods. Additionally, a Service can allow an application to be accessible from the internet. Each Service is given an IP address and a corresponding local DNS entry. Additionally, Services exist across Nodes. If you have two replica Pods on one Node and an additional replica Pod on another Node, the service can include all three Pods.
  • Deployments have the ability to keep a defined number of replica Pods up and running. A Deployment can also update those Pods to resemble the desired state by means of rolling updates. For example, if one wanted to update a container image to a newer version, one would create a Deployment, and the controller would update the container images one by one until the desired state is achieved. This ensures that there is no downtime when updating or altering Pods.
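The Service and Deployment concepts above might look like this as manifests. The names, image, replica count, and ports are illustrative assumptions, not a real configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sales-order-actor
spec:
  replicas: 3                  # keep three replica Pods up and running
  selector:
    matchLabels:
      app: sales-order-actor
  template:
    metadata:
      labels:
        app: sales-order-actor
    spec:
      containers:
        - name: sales-order-actor
          image: registry.example.com/sales-order-actor:1.0.0
---
apiVersion: v1
kind: Service
metadata:
  name: sales-order-actor
spec:
  selector:
    app: sales-order-actor     # load balances across all matching Pods, on any Node
  ports:
    - port: 80
      targetPort: 8080
```

Changing the image tag in the Deployment and re-applying it triggers the rolling update described above: the controller replaces Pods one by one until all replicas run the new version.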

Wrapping Up

At its core, the microservice architectural pattern is a distributed component model overlaid on a communications model. It exhibits both the advantages and tradeoffs of such systems. It is not a particularly difficult model to learn and use, but it is different and new to most developers, and it is plagued by excessive hype and misconceptions.

Enjoying its very real advantages requires understanding its constraints and having the engineering discipline to obey them. That takes study and practice. Once you get there, microservices are quick and easy to use and you’ll wonder why you ever did it any other way. If you want to dig into a little more detail, you might want to take a look at the Anatomy of a Microservice. The 18-minute YouTube video, titled The Problem with Microservices, is also well worth viewing.

Thank you.




A former US Army officer with a wonderful wife and family, I’m a software architect and engineer who has been building software systems for 50 years.