Serverless: Applying Hexagonal Architecture at Multiple Levels

Nano, Micro and Macro

John Gilbert
10 min read · May 12, 2024

Information hiding is the cornerstone of flexible software architecture. We hide the implementation details so that we can change the implementation without forcing clients to change.

Multiple mental models

There are many techniques that help us achieve information hiding, but we also need guidance on what we should hide and how we should structure our software to make the details substitutable. This is where Hexagonal Architecture comes in and provides us with a mental model of the shape of our software and how the pieces fit together.

But serverless turns everything inside out! We no longer have a monolith that we must protect from itself. Instead, we need to herd many fine-grained resources. This is an opportunity to apply hexagonal concepts at multiple levels to control the potential chaos.

We will employ three mental models, all building on the same concepts, at different levels: nano, micro, and macro. Let’s start at the nano or function-level.

Nano architecture — Function-level

Functions are the obvious place to start, because this is where the code lives, so the concepts should feel familiar.

Hexagonal architecture (aka ports & adapters) gets its name from the shape we use to draw the layers of the architecture. It is just a convenient shape that makes for more economical diagrams. The layers are what’s important.

A diagram represents a given module and its context. At the nano level a diagram represents a function, such as an AWS Lambda function, and the resources it interacts with.

We read these diagrams from left to right. The left is the driving side and the right is the driven side. In other words, left to right denotes the order of execution, with things on the left invoking things on the right, and so forth.

Function-level mental model

We divide the diagram into two layers: the model and adapter layers. The model lives at the center of a diagram. It represents the domain or purpose of the module. The outer adapter layer hides the information about the external dependencies.

We define interfaces (aka ports) on the model to decouple it from external dependencies. These include inbound interfaces for driving the model and outbound interfaces to things driven by the model. Then we use different adapters for different execution environments.

By convention, at the nano level, I refer to a driving adapter as a handler and a driven adapter as a connector. Let’s look at some code samples.

Model

A model contains the business logic and is decoupled from external dependencies.

export default class Model {
  constructor(connector) {
    this.connector = connector;
  }

  get(id) {
    return this.connector.get(id);
  }

  query(name) { ... }

  save(id, input) { ... }

  delete(id) { ... }
}

I implement a model as a class and the methods define the inbound ports.

I use simple constructor-based dependency injection to pass in the connectors that satisfy the outbound ports.

Here is a full example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/models/thing.js

Connector

A connector adapts an outbound interface of the model to an external API.

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

class Connector {
  constructor(tableName) {
    this.tableName = tableName;
    this.client = DynamoDBDocumentClient.from(new DynamoDBClient({ ... }));
  }

  get(id) {
    const params = {
      TableName: this.tableName,
      KeyConditionExpression: '#pk = :id',
      ...
    };
    return this.client.send(new QueryCommand(params))
      .then((data) => data.Items);
  }
  ...
}

This is a thin wrapper that hides the details of making remote calls to a resource, such as a DynamoDB table.

We get the primary benefit of connectors when we unit test the business logic of a model, because it is much easier to mock a connector than the external call.
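
For example, a unit test can drive the model through its inbound ports with a hand-rolled stub standing in for the real connector. This is a minimal sketch; the stub shape and assertion are illustrative, and the actual templates use their own test setup.

import assert from 'assert';
import Model from '../models/thing';

// stub connector that satisfies the outbound port without touching DynamoDB
const connector = {
  get: async (id) => [{ id, name: 'thing one' }],
};

const model = new Model(connector);

// exercise the business logic with no runtime dependencies
model.get('1')
  .then((data) => assert.deepStrictEqual(data, [{ id: '1', name: 'thing one' }]));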

Connectors also made it much easier to upgrade to version 3 of the aws-sdk, because all the details were hidden in these reusable connectors.

Here is a full example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/connectors/dynamodb.js

Handler

A handler adapts the request/response format of the FaaS provider, such as AWS Lambda, to the inbound interface of the model.

import Connector from '../connectors/dynamodb';
import Model from '../models/thing';

const api = require('lambda-api')();

api.app({
  models: {
    thing: new Model(new Connector(process.env.TABLE_NAME)),
  },
});

api.get('/things/:id', (req, resp) => req.namespace.models.thing
  .get(req.params.id)
  .then((data) => resp.status(200).json(data)));

export const handle = async (req, ctx) => api.run(req, ctx);

The handler code is mostly boilerplate. It initializes the models and connectors and then leverages libraries, such as lambda-api or aws-lambda-stream, to do the heavy lifting.

Again, it is easier to test all the permutations of the model business logic independent of the handler code and runtime dependencies.

You could have a Rest handler and a GraphQL handler, each using the same models and connectors.
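
For instance, a hypothetical GraphQL handler could delegate to the exact same model through a resolver map. This is only a sketch; the schema wiring is omitted and the resolver names are illustrative.

import Connector from '../connectors/dynamodb';
import Model from '../models/thing';

const models = {
  thing: new Model(new Connector(process.env.TABLE_NAME)),
};

// the resolvers delegate straight to the same inbound ports the rest handler uses
const resolvers = {
  Query: {
    thing: (_, { id }) => models.thing.get(id),
  },
  Mutation: {
    saveThing: (_, { id, input }) => models.thing.save(id, input),
  },
};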

lambda-api makes it easy to switch between AWS API Gateway and AWS Application Load Balancer. aws-lambda-stream makes it easy to switch between AWS Kinesis, AWS SQS, and any other asynchronous channel supported by AWS Lambda.

If you need to switch a high volume function from AWS Lambda to AWS Fargate, then you only need to change out the handler.
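
For example, a container entry point could swap lambda-api for a plain Express server, while the model and connector code stays untouched. This is a minimal sketch, assuming Express; the port and route are illustrative.

import express from 'express';
import Connector from '../connectors/dynamodb';
import Model from '../models/thing';

const models = {
  thing: new Model(new Connector(process.env.TABLE_NAME)),
};

const app = express();

// the same inbound port, now driven by an HTTP server instead of a Lambda event
app.get('/things/:id', (req, resp) => models.thing
  .get(req.params.id)
  .then((data) => resp.status(200).json(data)));

app.listen(process.env.PORT || 3000);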

Here is a full example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/rest/index.js

Now, let’s bump this up a notch to the micro or service-level.

Micro architecture — Service-level

We group related functions into services (aka autonomous services, aka event-driven microservices). We will build on our nano architecture to create a service-level micro architecture. At this level we will see that the functions act as the adapters.

Service-level mental model

I have covered the autonomous service patterns previously, here and here.

This service-level diagram represents a BFF service that allows users to interact with domain data to perform a specific activity. It provides what I refer to as a Trilateral API, with an inbound asynchronous API, a synchronous API, and an outbound asynchronous API. In hexagonal terminology these are the ports of the service.

Let’s dissect this further.

Model (entities)

In the nano architecture we saw that the model is implemented as one or more classes. In our micro architecture, all the functions that make up a service live in the same repository. This means that we can (and should) share these model classes across the functions of a service.

Ultimately, to share the data of a service across the functions we need to store the data in a datastore, such as DynamoDB. This datastore is owned by and only used by the service, and nothing else. This means that we can (and should) optimize the structure of the datastore to support the specific needs of the domain model of the service.

Thus, at the service-level we will think of the entities datastore as the model.
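
For example, a lean materialized-view item in the entities table might look like the following. The single-table key scheme shown here is illustrative.

{
  pk: '1',                  // partition key: the domain entity id
  sk: 'thing',              // sort key: discriminates entity types in a single table
  name: 'thing one',        // only the fields this activity actually needs
  timestamp: 1715500000000, // last-modified time, useful for idempotence checks
}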

Asynchronous Inbound Interface (listener)

The listener function is a stream processor that consumes upstream events and stores the needed data as a materialized view in the model. It is acting as an inbound adapter to hide the details about how upstream data gets into the service.

The upstream domain events drive the listener function which maps (adapts) the incoming data into the lean data format needed by the synchronous API. If the upstream data model changes then only the mapping logic in the listener function needs to change.

If we need to change the type of channel we use to receive the events then we only need to change the handler for the listener function, like we covered in the nano architecture section.
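
Here is a rough sketch of that mapping step. The event envelope, channel, and event type are illustrative; the templates implement this with aws-lambda-stream pipelines.

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// decode a Kinesis-style record payload (assumes a JSON event envelope)
const parseRecord = (record) =>
  JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString());

// map (adapt) the upstream domain event into the lean view this service needs
const toView = (event) => ({
  pk: event.thing.id,
  sk: 'thing',
  name: event.thing.name, // keep only the fields the frontend queries
  timestamp: event.timestamp,
});

export const handle = async (event) => Promise.all(event.Records
  .map(parseRecord)
  .filter((e) => e.type === 'thing-created')
  .map((e) => client.send(new PutCommand({
    TableName: process.env.TABLE_NAME,
    Item: toView(e),
  }))));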

Here is an example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/listener/index.js

You can find more on creating stream processors here.

Synchronous Interface (rest)

The rest function of a BFF service supports a specific user activity in the frontend. It provides queries to retrieve data and commands to mutate data.

The listener function has optimized the materialized views so that the queries have to do very little transformation. Commands store the data in an optimal format as well.

So, as we saw in the nano architecture section, the rest function is acting as an adapter between API Gateway and the model. If we choose to switch from API Gateway to another technology, then we only need to change the handler.

Here is an example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/rest/index.js

Asynchronous Outbound Interface (trigger)

The trigger function is a stream processor that consumes change events from the model’s change data capture (CDC) stream, such as DynamoDB Streams, and publishes domain events to the event bus. It is acting as an outbound adapter to hide the details about how events get out of the service.

The trigger function is driven by the commands executed through the rest function. It maps (adapts) the internal data format to the format of the domain events that will be consumed downstream. Again, if the format of the domain events changes then we only need to change the mapping logic in the trigger function.
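
Here is a rough sketch of the trigger's mapping step, assuming DynamoDB Streams and EventBridge. The event contract and names are illustrative; the templates implement this with aws-lambda-stream.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';
import { unmarshall } from '@aws-sdk/util-dynamodb';

const bus = new EventBridgeClient({});

// map (adapt) the internal CDC record into the published domain event format
const toDomainEvent = (record) => {
  const image = unmarshall(record.dynamodb.NewImage);
  return {
    type: 'thing-created', // illustrative event contract
    timestamp: Date.now(),
    thing: { id: image.pk, name: image.name },
  };
};

export const handle = async (event) => bus.send(new PutEventsCommand({
  Entries: event.Records
    .filter((r) => r.eventName === 'INSERT')
    .map((r) => ({
      EventBusName: process.env.BUS_NAME,
      Source: 'thing-service',
      DetailType: 'thing-created',
      Detail: JSON.stringify(toDomainEvent(r)),
    })),
}));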

Here is an example: https://github.com/jgilbert01/templates/blob/master/template-bff-service/src/trigger/index.js

CPCQ Flow

“Look on every exit as being an entrance somewhere else” (Tom Stoppard)

The Trilateral APIs of the micro architecture allow us to create a chain reaction between microservices that I refer to as the Command, Publish, Consume, Query (CPCQ) Flow.


A user executes a command via the rest function in one microservice and the trigger function publishes the outcome as a domain event. Then listeners of downstream microservices consume the event and materialize the data so it can be queried by other users. This flow can continue ad infinitum to create arbitrarily complex systems.
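
Concretely, one hop of the chain might look like this (the service and event names are illustrative):

// CPCQ chain reaction, one hop:
// 1. Command: POST /things/1        -> orders BFF rest function saves the item
// 2. Publish: DynamoDB Streams      -> orders BFF trigger emits 'thing-created'
// 3. Consume: 'thing-created' event -> reporting BFF listener materializes its view
// 4. Query:   GET /things/1         -> reporting BFF rest function reads the view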

Now, let’s bump this up another notch to the macro or subsystem-level.

Macro architecture — Subsystem-level

We group related services into subsystems (aka autonomous subsystems). We will build on our micro architecture to create a subsystem-level macro architecture. At this level we will see that a special category of services acts as the adapters.

Subsystem-level mental model

This diagram represents a single subsystem of a larger system. Each subsystem runs in its own cloud account and we connect subsystems via domain events, as I have covered here.

A subsystem-level diagram depicts all of the services within the subsystem. Note that we are not cluttering this high-level diagram with detailed event flows. We take as a given that each subsystem has an event hub/bus and events are flowing from left to right. Leftward (aka upstream) services are driving the rightward (aka downstream) services. In hexagonal terminology the domain events are the ports.

Let’s dissect this further.

Model (core)

At the subsystem level a core set of services composes the domain model of the subsystem. The user-facing BFF services implement the bulk of the model. The CPCQ Flow drives domain events through the subsystem. We can rely on choreography to define the flows, but we can also add Control (CTL) services to orchestrate them.

These BFF and Control services do not interact with resources outside of the subsystem. We hide that information in an anti-corruption layer (aka adapter layer).

Anti-corruption Layer

At the macro level, external dependencies are actually external systems. These can be 3rd party services, legacy systems, and other subsystems. We have little to no control over how and when these external systems may change. To emphasize this fact we promote the adapter layer to the level of an anti-corruption layer.

A set of External Service Gateway (ESG) services act as the adapters that bridge the gaps between the systems. Ingress gateways drive events into the subsystems. Egress gateways are driven by the model to send events out of the subsystem.
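
For example, an ingress gateway for a hypothetical 3rd party webhook might adapt the external payload to our domain events like this. All the names here are illustrative.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const bus = new EventBridgeClient({});

// ingress ESG: adapt a 3rd party webhook payload to our internal event contract
export const handle = async (request) => {
  const external = JSON.parse(request.body);

  const domainEvent = {
    type: 'thing-submitted',
    timestamp: Date.now(),
    // isolate the external field names here, so the core never sees them
    thing: { id: external.external_id, name: external.display_name },
  };

  await bus.send(new PutEventsCommand({
    Entries: [{
      EventBusName: process.env.BUS_NAME,
      Source: 'thing-esg',
      DetailType: domainEvent.type,
      Detail: JSON.stringify(domainEvent),
    }],
  }));

  return { statusCode: 200 };
};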

When we want to change from one 3rd party service to another we substitute a new ESG service that adapts the new 3rd party to our domain events. When we finally strangle a legacy system we just decommission those ESGs. When one subsystem needs to evolve faster than another the ESGs maintain backwards compatibility.

Integrating with external systems can be messy and every integration is unique. The number of ESG services can easily dwarf the number of services in the core model. This anti-corruption layer hides all these details and keeps the core services clean.

Summary

As you can see there is more than one way to apply the concepts of hexagonal architecture. At each level of granularity (nano, micro and macro) we have similar but different mental models that build on each other to help us create flexible, evolutionary systems at enterprise scale.

I identified the type of information we should hide at each level to allow us to change a system as our knowledge grows and evolves. The architecture also embraces the Serverless-First philosophy and I highlighted where we can switch from functions to containers on a function-by-function basis.

Testability is a main goal of hexagonal architecture. In a follow-on post I will discuss how hexagonal architecture facilitates my take on the Serverless Testing Honeycomb.
