The Rise of the Serverless Monoliths

David Bottiau
Feb 8, 2023

Over the past few decades, we have seen application architectures evolve at a rapid pace. When I was a young developer, I started out writing a simple code base of the kind we would now call a monolith. I remember writing some HTML/CSS for the front-end and some PHP on the back-end. Those were the good old days, but then came the time, and the need, for distributed architectures, which we now call microservices.

The evolution of software architecture, by Benoit Hediard

The fall of monoliths

I am not going to belabour the point that monoliths have become less and less desirable and that a lot of developers have started to preach microservices, which offer real benefits:

  • They are much smaller, making them easier to maintain.
  • They reduce friction between teams. This means that each team can work on each microservice separately.
  • They are faster to write (there is no need to follow an existing and sometimes tedious architecture).
  • They allow teams to use the best tool for the job (e.g. working with lots of JSON data? Maybe use Node.js. Need high performance? Maybe consider Rust. Only have Ruby developers? Then Ruby seems to be the solution.).
  • They reduce cognitive load, meaning that each developer only needs to know a subset of the code, rather than the entire code base.

The myths about microservices

However, regular, and sometimes extensive, use of microservices has its drawbacks:

  • Some code (data models or functions) is duplicated between multiple repositories, which usually leads to the shared-library vs. mono-repo debate.
  • Handling transactions between multiple microservices is challenging and requires additional patterns on top (Saga, Event Sourcing, etc…), whether that complexity is accidental or essential.
  • Depending on the context, they can add hugely to the cognitive load, meaning that each developer needs to know what the microservice can/should do, but also what other microservices it can/should communicate with.
  • And in almost every scenario, you are more vulnerable to failures: database connections, network latency, caching, exceptions, etc…

But as any wise developer will tell you, the answer to any architectural choice is always “it depends”.

So this monolith vs. microservices debate made me realise that, in a well-decoupled architecture, we “only” have to deal with at most 4 different parts:

  1. The UI, also known as the front-end
  2. The BFF, the Backend For Frontend, or should I say the Backend for a Single Frontend (BSF)
  3. A traditional backend acting as the glue between the frontend and the data. I would opt for BFD, a.k.a. Backend For Database, or should I say the backend for multiple BSFs
  4. The DB, also known as the database, and its query mechanism
Representation of a fully decoupled microservices architecture

From this familiar pattern, we already have the appropriate tech stacks:

  1. A frontend framework (Angular, React, Vue, Svelte, etc…)
  2. A BFF using the appropriate technology (a simple REST API? a GraphQL server in Node.js?)
  3. A traditional backend (let’s call it BFD for now) using, again, the appropriate technology (another REST API? a high-performance gRPC server?)
  4. And finally the minimum number of databases required (relational database and/or document database and/or graph database and/or search engine)
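To make the BFF’s role concrete, here is a minimal TypeScript sketch. The names (`profileView`, `getUser`, `getOrders`) are made up for illustration: the two data sources stand in for calls to the traditional backend (the BFD), and the BFF aggregates them into exactly the shape one frontend screen needs.

```typescript
// Hypothetical node.js BFF endpoint logic: aggregate two backend calls
// and return only what one frontend view renders.
type User = { id: string; name: string };
type Order = { userId: string; total: number };

export async function profileView(
  getUser: (id: string) => Promise<User>,   // stand-in for a BFD call
  getOrders: (userId: string) => Promise<Order[]>, // stand-in for a BFD call
  userId: string,
) {
  // Fan out to the backend in parallel...
  const [user, orders] = await Promise.all([getUser(userId), getOrders(userId)]);
  // ...then reshape: the browser receives a view model, not raw entities.
  return {
    name: user.name,
    orderCount: orders.length,
    totalSpent: orders.reduce((sum, o) => sum + o.total, 0),
  };
}
```

Because the data sources are injected as parameters, the same function works whether the BFF is a REST handler or a GraphQL resolver.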

If we care about simplicity, I think there is room for improvement. We should also agree on how much of each part of the tech stack we need:

  1. At least one front-end, but you can scale that number infinitely, whether you are writing a micro-front-end or a myriad of web applications, or both
  2. One front-end = one BFF if we follow the logic
  3. A traditional backend that you can split into N microservices if you like. But let’s just say 1 if we are going for a monolith.
  4. At least one database per type. Let’s say we need 3 types of databases for a medium-sized application.
Representation of an architecture with a Backend For Database (BFD) monolith
N = (2 * UI) + (1 * BFD) + (3 * DB) 

Again, as the saying goes, “less is more”, so our aim is to reduce this number (N), six technologies in the example above (2 + 1 + 3), to the absolute minimum.

Keep It Simple, Stupid.

Entering the era of Serverless Monoliths

The frontend meta-framework generation

A remarkable evolution we have seen in recent years has been the creation of several front-end meta-frameworks, most notably Next.js, Remix and SvelteKit. The goal of a meta-framework is to handle both the front-end and the back-end sides of the front-end (yeah, that does not sound smart when you say it like that). In other words, it means building the UI + the BFF with a single technology.

And thanks to today’s Cloud and hosting solutions like Vercel, we can easily deploy meta-frameworks in a Serverless mode.
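As a sketch of what “UI + BFF in one technology” looks like, here is a hypothetical SvelteKit-style `load` function (a Next.js server component or a Remix loader plays the same role). The `/api/articles` URL and the article shape are assumptions, and `fetch` is injected as a parameter so the sketch stays framework-agnostic.

```typescript
// Hypothetical meta-framework data loader: the BFF logic lives in the
// same project, and the same language, as the UI it serves.
type Fetch = (url: string) => Promise<{ json(): Promise<unknown> }>;

export async function load(fetch: Fetch) {
  // The BFF half: call the backend from server-side code...
  const res = await fetch("/api/articles");
  const articles = (await res.json()) as { title: string; draft: boolean }[];
  // ...and filter/reshape before anything reaches the browser.
  return { titles: articles.filter((a) => !a.draft).map((a) => a.title) };
}
```

The function runs on the server (or in a serverless function on a host like Vercel), so the browser only ever sees the final `titles` array.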

Representation of BFD and meta-framework monoliths architecture
N = META-FRAMEWORK + (1 * BFD) + (3 * DB)

From there, we reduced the number of technologies by 1… for each front-end!

The Serverless database generation

There is also a trend right now for solutions around Database as a Service (DaaS), or, I should say, Backend as a Service (BaaS). The goal of a BaaS is to provide all the functionality your application needs so that you don’t have to write a single line of back-end code. All you have to do is write queries inside your BFF and voilà.

The best known BaaS is undoubtedly Firebase, which offers a myriad of features such as a real-time document database, an authentication service, a permissions mechanism on top of the database, a file system storage, and so on…

However, Firebase has some serious limitations:

  • The Firebase database, whether it is the Realtime database or Firestore, is a single-model database (a document database)
  • It can only be traversed as a one-way graph (if we can think of it as a graph at all)

There is also Supabase, another famous BaaS competing with Firebase. Using a relational database like PostgreSQL removes some of Firebase’s limitations, but it is still a single-model database…

One project that has caught my attention recently is SurrealDB. It is a database with a built-in backend that has many, many features (and I don’t think I wrote “many” enough). Being a true multi-model database with a new kind of query language, it can provide features that would otherwise have forced you to write back-end code.
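As a taste of that query language, here is a short SurrealQL sketch (syntax as described in the SurrealDB documentation; the table and record names are made up) showing document and graph features living in one database:

```sql
-- Create a document-style record with schemaless fields
CREATE user:ada SET name = 'Ada', tags = ['founder'];

-- Create a graph edge between two records
RELATE user:ada->wrote->article:intro;

-- Traverse the graph from the user to the articles they wrote
SELECT ->wrote->article.* FROM user:ada;
```

In a single-model database, the graph traversal in the last query would typically require join tables or application code in the backend.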

In recent times, this type of database has become more and more widely known as a meta-database.

Representation of a fully monolithic architecture

And from there, we drastically reduce the number of technologies once again.

Bonus: Leverage the mono-repo architecture

As with microservices, writing monoliths means having the right toolbox. A toolbox that removes the constraints we usually deal with, such as:

  • Too big to fail: one simple mistake can bring down the entire service
  • Long deployments: compiling a large project often takes a long time
  • A single code base that you can’t isolate and share across teams

With this architecture, the need for a pure, all-in-one monolith (front-end + back-end) is out of the equation. However, the meta-framework is the part where more than 80% of the code will essentially reside, and for that there are now tools you can use, such as Turborepo.
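For example, a minimal `turbo.json` at the root of the mono-repo lets each package build, lint and cache independently. The task names here are assumptions about your `package.json` scripts, and the `pipeline` key applies to Turborepo 1.x (2.x renamed it to `tasks`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
    "lint": {},
    "dev": { "cache": false, "persistent": true }
  }
}
```

The `"^build"` dependency means each package is built only after the packages it depends on, which keeps team ownership isolated inside one repository.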

One thing we have not mentioned is the inevitable need for database migration scripts. Of course, these scripts need to be stored in a separate repository. Nothing fancy here.

In conclusion, we end up with only 2 technologies in our tech stack: a meta-framework to handle the front-end logic, and a single “framework” to handle the back-end.

It all makes sense because you have the data on one side and the user interface on the other.

At the moment, I do not think it is a viable decision to have only one solution to do everything. If you think otherwise, please let me know in the comments.



David Bottiau

Software designer, open-minded, Lean being and UX advocate.