Breaking Down the Serverless Monolith

Adopting serverless technologies to deliver your business applications does not protect you from building a monolith. We’ll take a look at how a hard limit brought this reality home and the approach we’ve taken to break down our serverless monolith.

Jon Vines
5 min read · Dec 4, 2018

Throughout the last six months, we’ve been developing and deploying our new applications almost exclusively using a serverless approach. This has brought with it a number of advantages, including those usually associated with adopting serverless, namely:

  • No servers to manage and lower infrastructure burden
  • Massively reduced cost
  • Speed to delivery

Despite the many obvious benefits, we’ve come across a few issues as our applications grew and the complexity of our use cases became more apparent. Over this time, our approach has become increasingly event-driven, with a tendency towards single-responsibility functions (as it should be, I know). As the application grew, so did our Serverless config, until eventually it became far too big and we hit CloudFormation’s limit of 200 resources per stack.
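To give a feel for why the limit arrives so quickly, here is a minimal sketch (with hypothetical service and handler names, not our real config) of the shape our single serverless.yml had taken. Each HTTP-triggered function like these typically expands into several CloudFormation resources (the Lambda function, its version, its log group, and the API Gateway resource, method and permission), so a few dozen functions are enough to approach the 200-resource ceiling.

```yaml
# Hypothetical excerpt from a single, ever-growing serverless.yml.
# Each HTTP-triggered function typically expands into several CloudFormation
# resources, so the 200-resource limit is reached sooner than you'd expect.
service: requests

provider:
  name: aws
  runtime: nodejs8.10

functions:
  createRequest:
    handler: src/requests/create.handler
    events:
      - http:
          path: requests
          method: post
  updateRequest:
    handler: src/requests/update.handler
    events:
      - http:
          path: requests/{id}
          method: put
```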

Application Growth

Our application started very simply: we needed to capture requests via an API endpoint. It soon became apparent that there were at least two more request types and that a front-end was needed. Further to this, we then had to start managing the request life-cycle, including managing properties of the request and notifying teams of specific actions that needed to be undertaken.

Simplified example showing one request type

We found ourselves with a Serverless config describing a number of use cases. We were creating and managing three request types, notifying specific teams at certain points in the request life-cycle, and producing data dumps for further processing. Even within this short description, we can identify five boundaries where it would make sense to split up our master Serverless config.

Split from master Serverless to multiple Serverless configs
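As an illustration, each bounded context now gets its own small serverless.yml, with the infrastructure it owns defined alongside its functions. This is a minimal sketch assuming hypothetical service and resource names, not our real configuration:

```yaml
# notifications/serverless.yml - a hypothetical per-context config.
# Only the functions and infrastructure owned by this bounded context live here.
service: request-notifications

provider:
  name: aws
  runtime: nodejs8.10

functions:
  notifyTeam:
    handler: src/notifyTeam.handler
    events:
      - http:
          path: notifications
          method: post

resources:
  Resources:
    NotificationTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: request-notifications-${opt:stage, 'dev'}
```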

Taking this approach meant we had to overcome a number of challenges related to our use of API Gateway. When deploying from the master serverless file, we deployed to one API Gateway, meaning we had one URL and one API key that the front-end client could use. Now that we’ve split into five separate serverless files, we have potentially five API Gateways to contend with. To further complicate things, each now generates its own API key.

Multiple services

One of the great things about the Serverless Framework is the extensibility provided by its plugin ecosystem, and it was here that we found the answers to the problems we were facing. We also found a great resource, How to deploy multiple micro-services under one API domain with Serverless by Alex DeBrie. Without repeating the detail of that post, we used the serverless-domain-manager plugin, which lets us use API Gateway’s base path mapping feature to deploy multiple services to the same domain name.

Representation of how using serverless-domain-manager allows us to access multiple API Gateways with one domain name
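The plugin configuration itself is short. This is a minimal sketch assuming a hypothetical domain name and base path: each of the five services declares the same domainName but its own basePath, so requests to that path are routed to the right API Gateway.

```yaml
# Hypothetical excerpt from one of the split serverless.yml files.
# Every service shares the same custom domain; only the basePath differs,
# so api.example.com/notifications routes to this service's API Gateway.
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com
    basePath: notifications
    stage: ${self:provider.stage}
    createRoute53Record: true
```

The plugin also provides a create_domain command, which has to be run once per domain before the first deploy to provision the custom domain in API Gateway.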

This approach allowed us to split our application by behaviour, following the rules of Domain Driven Design and applying strict domain boundaries through bounded contexts. It also meant that any change we made was within a very small context, which reduced the blast radius for any deployment in any given part of the application. Our base path is now our route into our bounded context within the wider application. Another great benefit is that by defining our stack this way, all other AWS infrastructure is defined alongside our compute.

Our next challenge lay in sharing API keys across the API Gateways we’ve created. Once again, a Serverless plugin came to our rescue: serverless-add-api-key. Usage of this plugin is really simple: if the API key already exists, it creates a usage plan for the API Gateway being deployed and associates it with that key.
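Again assuming a hypothetical key name (and noting that the exact option names may differ between plugin versions), the configuration is a short block in each service’s serverless.yml:

```yaml
# Hypothetical excerpt: each split service declares the same key name,
# so the shared API key is reused and a usage plan is attached for this
# service's API Gateway.
plugins:
  - serverless-domain-manager
  - serverless-add-api-key

custom:
  apiKeys:
    - name: front-end-client-key
```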

At this point, we’d overcome our two major challenges of breaking down our Serverless application. In the process, we’d massively simplified our application and we feel we’re more closely aligned with what exactly a Serverless microservices architecture should look like.

Deployments

Whilst we’re very happy with how the services have been broken down, we’re now faced with the challenge of deployment. We effectively have a mono-repository of related services, and each deployment triggers the deployment of every service.

We’re left with two options:

  1. We split each service into an individual repository, effectively allowing independent releases. This would mean duplication of build scripts and maintaining multiple repositories. The link between the services would also not be as obvious and would require more rigorous documentation of the relationships.
  2. We build in the capability to detect, at deployment time, whether the service has actually changed. We believe we can do this by comparing the hash of the zip with that of the currently deployed service; if the hashes differ, we deploy the service.

Our current thinking is to adopt the second choice. This will likely lead to a follow-on blog post.
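One way this could work, sketched below with hypothetical function and package names, is to compare the SHA-256 of the locally built package with the CodeSha256 that Lambda reports for the deployed function:

```bash
#!/usr/bin/env bash
# Rough sketch of the idea, not a finished script: skip the deploy when the
# local package hash matches what Lambda already reports for the function.
# The function name and package path below are hypothetical.
set -euo pipefail

FUNCTION_NAME="request-service-prod-createRequest"
PACKAGE=".serverless/request-service.zip"

# Lambda exposes the deployed package hash as a base64-encoded SHA-256.
deployed_hash=$(aws lambda get-function \
  --function-name "$FUNCTION_NAME" \
  --query 'Configuration.CodeSha256' \
  --output text)

# Hash the freshly built package the same way.
local_hash=$(openssl dgst -sha256 -binary "$PACKAGE" | openssl base64)

if [ "$local_hash" != "$deployed_hash" ]; then
  serverless deploy --stage prod
else
  echo "No change detected for $FUNCTION_NAME, skipping deploy."
fi
```

In practice the comparison only holds if the packaging step is deterministic (zip timestamps alone can change the hash even when the code hasn’t), so this is a starting point rather than a finished pipeline.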

What have we done?

Perhaps our biggest learning is that hitting the CloudFormation resource limit in our Serverless config gave us a natural, hard point at which to break our application down further. When to break an application into smaller services, a monolith into microservices if you will, has always been a judgement call. Split too early and we introduce unneeded complexity. Split too late and we’ve increased the risk of coupling across domain boundaries.

Now that we’ve come up against the hard limits of what a serverless microservices architecture looks like, we’re left thinking about what the soft limits are. How many functions are too many? This is the key question to ask yourself, as all other resources hang off the back of the functions we define; queues, storage and other services are not much use without the compute that drives them.

By using only two Serverless plugins, we’ve been able to adopt a clean way to break down our application architecture further, reduce dependencies between services, and reduce the blast radius of deployments (and incidents), all with minimal impact on the front-end already using these back-end functions.

We will carry the lessons we’ve learned into any new applications we build, and apply them early. We will likely build with the resource limit in mind, so that when it’s time to break things down it’s a simple case of amending our deployment scripts and pointing at the new functions. The relative ease of the migration to smaller services has further vindicated our decision to adopt serverless as a means of delivering software applications.


Jon Vines

Software Engineer and Team Lead at AO.com. Aspiring DevOps practitioner. Thoughts are my own.