The wonderful journey of a monolithic API that wants to become serverless

Roberto Valletta
Published in Cortilia Team Blog
4 min read · Jun 30, 2020

Once upon a time there was an API that lived on the sea bottom, beneath a monolithic system. One day the API swam up to the surface and discovered the island of Serverless Architecture. This world was prosperous, full of colors and components never seen before.
At Cortilia we guided this API through an adventurous journey, and now we are going to tell you about it.

These are the chapters of our fairy tale:

  1. Chapter One: The Lion, the Witch and the Serverless;
  2. Chapter Two: Bitbucket pipeline and the three bears;
  3. Chapter Three: API Gateway in Wonderland;
  4. Chapter Four: The Story of a Data Lake and the Cat Who Taught Her to Fly;
  5. Chapter Five: Everybody Lived Happily Ever After…

This article is Chapter One. Stay tuned and follow us on this adventure!

Are you ready for adventure, Captain?

Chapter 1

The hero of our fairy tale is an old-style monolithic API. We would like to spend some time showing you how we:

  • Split the monolith into a scalable, reliable serverless architecture;
  • Moved the BI data flow to a data lake;
  • Adopted a DevOps culture to better control our deployment and development processes.

Here is the current API flowchart:

Monolithic API Flow Chart

This is the flowchart of the real-time API.

In addition:

  • There is a job that moves data from the primary database to a Data Warehouse used by the BI platform;
  • The API is called internally to trace some subcalls;
  • Some processes write directly on the database tables;

Now we are going to take you on a fantastic journey: like Alice, we are going to bring our monolithic API to Wonderland.

Let’s start!

Cortilia’s servers live in the AWS Frankfurt Region, so we used out-of-the-box AWS components to build our serverless architecture.

The final flowchart will be, more or less, like this:

The architecture is quite linear:

  1. The AWS API Gateway is triggered by a POST;
  2. An authorization service is called (we will go deep on it in the following chapters);
  3. The publisher lambda gets the data and publishes it on an SNS topic;
  4. The consumer lambda gets the data from SNS and writes it on S3;
It Could Work!

The focus of this chapter is Infrastructure as Code and the two Lambdas.

We use AWS SAM (Serverless Application Model), a kind of “fast start” for serverless and IaC components that, behind the scenes, uses CloudFormation to create the underlying infrastructure.

IaC is needed to increase delivery speed in a reliable way and, in addition, gives us the ability to manage the infrastructure through a single descriptive, versioned configuration file.

Here are some pros of IaC:

  • Rapid delivery of high-value systems;
  • An easy way to change the infrastructure;
  • Reliability, security, visibility and control;
  • An easily identifiable “source of truth”.

In the “cloud age”, IaC is a primary tool; moreover, we want to avoid any manual setup of the infrastructure.

Here is the YAML file that defines the infrastructure:

This file (template.yml) is responsible for the creation of the following components:

  1. A POST endpoint on the API Gateway that triggers the Publisher Lambda;
  2. An S3 bucket;
  3. The SNS topic that triggers the Subscriber Lambda;
  4. The Publisher Lambda, written for the Node.js runtime;
  5. The Subscriber Lambda, written for the Node.js runtime;
  6. All the IAM roles.
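The full template lives in the source repository linked below; as a rough, illustrative sketch only (resource names, paths and the runtime version are our assumptions, not the actual Cortilia template), a SAM file covering the six components above could look like:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # 1 + 4: POST endpoint that triggers the Publisher Lambda
  PublisherFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs12.x
      Handler: publisher.handler
      CodeUri: src/publisher/
      Environment:
        Variables:
          TOPIC_ARN: !Ref DataTopic
      Policies:
        # 6: SAM policy templates generate the minimal IAM roles
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt DataTopic.TopicName
      Events:
        PostData:
          Type: Api
          Properties:
            Path: /data
            Method: post

  # 2: bucket where the subscriber writes the JSON payloads
  DataBucket:
    Type: AWS::S3::Bucket

  # 3: topic connecting publisher and subscriber
  DataTopic:
    Type: AWS::SNS::Topic

  # 5: Subscriber Lambda triggered by the SNS topic
  SubscriberFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs12.x
      Handler: subscriber.handler
      CodeUri: src/subscriber/
      Environment:
        Variables:
          BUCKET_NAME: !Ref DataBucket
      Policies:
        - S3WritePolicy:
            BucketName: !Ref DataBucket
      Events:
        NewMessage:
          Type: SNS
          Properties:
            Topic: !Ref DataTopic
```

Note how the SAM policy templates (SNSPublishMessagePolicy, S3WritePolicy) grant each Lambda only the permissions it actually needs, which is exactly the least-privilege approach described below.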

To stick to AWS best practices, we define the minimum set of roles needed to run the application.

Now it’s time to introduce the main characters of this chapter:

  • Publisher Lambda;
  • Subscriber Lambda;

The Publisher Lambda function should:

  1. Validate the data passed by the API Gateway;
  2. Send it to SNS.

We added simple data validation with Joi (https://github.com/hapijs/joi). If validation fails, HTTP status code 400 is returned in order to inform the client that the data has not been processed.

If validation succeeds, the Lambda publishes the data to the SNS topic.
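As a minimal sketch of such a publisher (names and the payload shape are illustrative, not the actual Cortilia code; the SNS client is injected so the logic can be exercised without AWS, and a plain hand-rolled check stands in for the Joi schema):

```javascript
// Hypothetical publisher handler: validates the POST body, returns 400 on
// bad input, otherwise publishes the payload via the injected `publish`
// function (in production this would wrap the AWS SDK's sns.publish).

// Plain check standing in for the Joi schema used in the article.
function validate(payload) {
  return payload !== null &&
    typeof payload === 'object' &&
    typeof payload.event === 'string' &&
    typeof payload.timestamp === 'number';
}

function makePublisherHandler(publish) {
  return async function handler(event) {
    let payload;
    try {
      payload = JSON.parse(event.body);
    } catch (err) {
      return { statusCode: 400, body: 'Malformed JSON' };
    }
    if (!validate(payload)) {
      // Tell the client the data has NOT been processed.
      return { statusCode: 400, body: 'Invalid payload' };
    }
    await publish(JSON.stringify(payload)); // send it to the SNS topic
    return { statusCode: 200, body: 'Published' };
  };
}

module.exports = { makePublisherHandler, validate };
```

Injecting `publish` is a design choice for testability: the handler can be unit-tested with a fake publisher, without any AWS credentials.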

Once the data is on the SNS Topic it’s time to create the subscriber which has to:

  1. Validate the data;
  2. Write it to the S3 bucket.

The subscriber is quite simple: it validates the data (we should not assume that everything on the SNS topic can be trusted) and writes a JSON file to an S3 bucket, using the unique SNS MessageId as the filename.
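A subscriber along these lines might look like the following sketch (field names and the injected `putObject` are assumptions; in production `putObject` would wrap the AWS SDK's s3.putObject, and the event shape is the standard SNS-to-Lambda payload):

```javascript
// Hypothetical subscriber handler: re-validates every SNS record and writes
// it to S3 as <MessageId>.json via the injected `putObject` function.

function validate(payload) {
  // Do not trust the topic blindly: re-check the shape of every message.
  return payload !== null &&
    typeof payload === 'object' &&
    typeof payload.event === 'string' &&
    typeof payload.timestamp === 'number';
}

function makeSubscriberHandler(putObject, bucket) {
  return async function handler(event) {
    // An SNS-triggered Lambda receives messages under event.Records[].Sns.
    for (const record of event.Records) {
      const { MessageId, Message } = record.Sns;
      const payload = JSON.parse(Message);
      if (!validate(payload)) {
        throw new Error(`Invalid message ${MessageId}`);
      }
      // The unique SNS MessageId becomes the filename, as described above.
      await putObject({
        Bucket: bucket,
        Key: `${MessageId}.json`,
        Body: JSON.stringify(payload),
      });
    }
  };
}

module.exports = { makeSubscriberHandler };
```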

At this point, to run the application, just execute “sam build” in the console and then “sam deploy”… and abracadabra! The app is built and deployed to the AWS ecosystem.

In the next chapters we will travel across the tropical jungle of serverless architectures, meeting nice characters like CI/CD, data lakes and authentication.

Stay tuned!

Suggested soundtrack: St. Thomas, Sonny Rollins, 1957

Links

Source code: https://bitbucket.org/cortilia/lambda-publisher-subscriber/src
