Serverless Computing: Introduction


Vipin Kumar

Before diving directly into serverless, I’ll start from monolithic application architecture, and together we’ll transition to microservices and then to serverless. This lets you establish a logical comparison between serverless and the other architectural design patterns. Which one to choose depends entirely on the resources and requirements at hand, as there is no silver bullet for every problem. So let’s start.

Monolithic Architecture

In a monolithic architecture, an application is developed as a single unit and may have multiple responsibilities. For example, in a shopping store, all the components (the authentication module, cart module, catalog module, payment module, etc.) are built together as a single unit.

Deployment and code distribution are easy, since there is only one unit to move around. But a monolith also presents some basic problems with modularity, code maintenance, scalability, and so on.

The reason for introducing the monolith here is to discuss containers. Our shopping store will be a web application, so it will be deployed in an application server, and the app will also have its own execution boundary within an environment container such as the JVM. The point is that our web application needs an ecosystem, or container, where it can execute and serve requests, and that ecosystem is provided by an application server.
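To make the monolith concrete, here is a minimal sketch in Python. The module and class names are invented for illustration; the point is simply that every responsibility lives in one deployable unit with one entry point:

```python
# A minimal sketch of a monolithic shopping store: every module
# (authentication, cart, ...) lives in one deployable unit.

class AuthModule:
    def __init__(self):
        self._users = {"alice": "secret"}

    def login(self, user, password):
        return self._users.get(user) == password


class CartModule:
    def __init__(self):
        self._carts = {}

    def add_item(self, user, item):
        self._carts.setdefault(user, []).append(item)
        return self._carts[user]


class ShoppingStoreApp:
    """The single unit: one process, one deployment artifact."""
    def __init__(self):
        self.auth = AuthModule()
        self.cart = CartModule()

    def handle(self, action, **kwargs):
        # One entry point dispatches to every module -- the whole app
        # ships, scales, and fails together.
        if action == "login":
            return self.auth.login(kwargs["user"], kwargs["password"])
        if action == "add_to_cart":
            return self.cart.add_item(kwargs["user"], kwargs["item"])
        raise ValueError(f"unknown action: {action}")


app = ShoppingStoreApp()
print(app.handle("login", user="alice", password="secret"))   # True
print(app.handle("add_to_cart", user="alice", item="book"))   # ['book']
```

Notice that scaling this app means scaling everything at once, which is exactly the limitation microservices address next.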

Microservice Architecture

After facing the challenges of monolithic applications, we have to think up a new architecture and strategy. This new design pattern evolved with modularity in mind: each module has only a single responsibility and does not care about the responsibilities of any other module. Since each module represents a small service, this design pattern is called microservice architecture.

This gives great flexibility in scaling an application up and down, maintaining it, and so on, but it also brings challenges in deployment, API management, and service discovery once the number of services grows large.

We have seen how a single ecosystem is divided into multiple small ecosystems, each providing life support to an individual module. This small per-module ecosystem is also called a container.
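The same store, split along module boundaries, might look like the sketch below. The service names are illustrative, and the in-process `network` dict merely stands in for real HTTP/RPC between containers:

```python
# A sketch of the store split into microservices: each service owns a
# single responsibility and would run in its own container.

class CatalogService:
    """Runs in its own container; only knows about products."""
    def __init__(self):
        self._products = {"book": 12.50, "pen": 1.25}

    def price(self, item):
        return self._products[item]


class CartService:
    """Runs in a separate container; talks to the catalog remotely."""
    def __init__(self, catalog_url, network):
        self._catalog_url = catalog_url
        self._network = network   # stand-in for service discovery + HTTP
        self._items = []

    def add(self, item):
        # A "remote call" to the catalog service, simulated as a lookup.
        catalog = self._network[self._catalog_url]
        self._items.append((item, catalog.price(item)))

    def total(self):
        return sum(price for _, price in self._items)


# Each service is deployed independently; "network" maps URLs to instances.
network = {"http://catalog.internal": CatalogService()}
cart = CartService("http://catalog.internal", network)
cart.add("book")
cart.add("pen")
print(cart.total())  # 13.75
```

Because each service can now fail, scale, and deploy on its own, the coupling moves from function calls to the network, which is where the API management and service discovery challenges mentioned above come from.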


From the above discussion we know that a monolith can be divided into a microservices architecture if the scope and boundary of each module are known. Now let’s think at a very high level. We have a service that contains certain APIs, functionality, and responsibilities. Before writing that functionality, we have to set up the application execution environment, properties, configuration, etc., to correctly initialize the container or ecosystem that executes it. But the actual core functionality lives only in the functions or methods that do the real work.

What if we break this microservice down even further, take out all the individual functions, and execute each one separately in its own container or ecosystem? This is the transition from a monolith to individual functions.

Let’s take an example to better understand this transition, because it is important for establishing further concepts.

Let’s consider a blog service composed of a handful of individual functions.


We can take every blog service function out of its original, designed ecosystem and move it into another ecosystem altogether, or even put each function into its own separate container.

So we have broken the blog service down into its individual functions, and those functions can be executed separately, anywhere within a supporting ecosystem.
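Here is a hedged sketch of what those extracted functions could look like. The function names and the dict-based store are illustrative assumptions, since the original service’s exact functions are not listed:

```python
# The blog service broken into standalone functions. Each one is fully
# self-contained -- it takes its input as a plain dict and returns a dict --
# so each could be packaged and run in a separate container.

def create_post(event, store):
    post_id = len(store) + 1
    store[post_id] = {"title": event["title"], "body": event["body"]}
    return {"id": post_id}

def get_post(event, store):
    return store.get(event["id"], {"error": "not found"})

def delete_post(event, store):
    return {"deleted": store.pop(event["id"], None) is not None}

# Each function can be deployed and invoked independently. The `store`
# would be an external database in practice, because the functions no
# longer share a single process.
store = {}
print(create_post({"title": "Hello", "body": "First post"}, store))  # {'id': 1}
print(get_post({"id": 1}, store)["title"])                           # Hello
```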


So with containerization, every function, service, or application executes within its own designated boundary and scope. Containerization provides a sense of virtualization on top of a single host operating system. Docker is a good example of a containerization tool.

Now we have acquired enough conceptual knowledge to dive right into serverless computing.

Serverless Computing

First things first, let’s give you something to think about. In the monolith and microservice architectures above, we have dedicated infrastructure and servers executing the services and providing the desired functionality. So until now, we have had constantly running infrastructure. But in serverless, there is literally no constantly running infrastructure or servers.

OK, that’s weird, but then how does the functionality work?

Here comes the real secret.

BaaS: Backend as a Service

There are many third-party services that implement commonly required functionality. These services provide the server-side logic and manage their own internal state, which leads to applications that have no application-specific server-side logic and use third-party services for everything. Such applications are serverless, using the entire backend as a service.

Examples of such third-party services are authentication services (Auth0, AWS Cognito), logging services (Loggly, Logsense), and analytics services (Amazon Kinesis, Keen IO).

FaaS: Function as a Service

When an application requires some server-side logic, FaaS can be used. FaaS functions are short-lived, stateless functions that are triggered by events; they can communicate with each other and even expose APIs to the external world.

The FaaS provider does the rest: provisioning as many function instances as necessary to handle all of the requests, terminating instances that are no longer required, monitoring all of them, and providing identity and logging services.

E.g. Auth0 Webtask, AWS Lambda, Google Cloud Functions, and Azure Functions.

  • All logic is in the form of functions
  • Code lives in the cloud
  • No dedicated infrastructure in the beginning
  • Providers have different naming conventions, e.g. AWS Lambda, Google Cloud Functions, Azure Functions
  • Infrastructure is set up and made available on the very first request or event
  • Once processing is complete, the entire infrastructure is destroyed
  • No idle time for servers and resources
  • More precisely, when a request comes in, a container is initialized to execute the code along with its infrastructure, and it is destroyed after the request completes. So: a separate container per request.
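The request lifecycle described above can be sketched as follows. The handler follows the AWS Lambda `(event, context)` style, but the provider loop is a toy simulation of the cold path, not real provider behavior (real providers also reuse warm containers):

```python
# A sketch of the per-request lifecycle: the "provider" spins up a fresh
# container for each incoming event, runs the function, and tears the
# container down afterwards.

def handler(event, context):
    """A stateless function in AWS Lambda handler style."""
    return {"statusCode": 200, "body": f"Hello, {event.get('name', 'world')}"}


class Container:
    def __init__(self, fn):
        self.fn = fn          # code is loaded into the fresh container
        self.alive = True

    def invoke(self, event):
        return self.fn(event, context={})

    def destroy(self):
        self.alive = False


def provider_dispatch(event):
    container = Container(handler)   # infrastructure appears on the request
    try:
        return container.invoke(event)
    finally:
        container.destroy()          # and is destroyed after completion


print(provider_dispatch({"name": "Vipin"})["body"])  # Hello, Vipin
```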

Let’s revisit our blog service and check the shape of serverless architecture now.

Serverless Computing Benefits and Challenges


Benefits:

  • Low operational cost
  • Efficient, automatic scaling
  • Less packaging and deployment complexity
  • Fewer developers required
  • Infrastructure that is managed for you
  • Develop and market a prototype in little to no time


Challenges:

  • Vendor lock-in and vendor control
  • Security concerns
  • Code repetition across multiple client platforms in the case of full BaaS
  • No control over server optimisation
  • No state in serverless FaaS
  • DoS problems
  • Execution duration limits: AWS Lambda aborts functions that run longer than 5 minutes
  • Startup latency: AWS claims it is under 30 ms
  • Testing and debugging: only unit testing can be done efficiently; integration testing and debugging support are still works in progress
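To illustrate the statelessness challenge from the list above: two invocations of the same function may land in different containers, so in-memory state does not survive between them, and state must live in an external store. In this sketch, a plain dict stands in for an external database such as DynamoDB:

```python
# Why "no state" is a challenge: a module-level counter inside a FaaS
# function would reset on every cold start, so state must be kept outside.

external_store = {}  # stand-in for a real external database

def count_visits(event, context):
    # Read-modify-write against external state, instead of a local counter.
    key = event["page"]
    external_store[key] = external_store.get(key, 0) + 1
    return {"page": key, "visits": external_store[key]}

# Two "requests", possibly handled by two different containers:
print(count_visits({"page": "/home"}, {}))  # {'page': '/home', 'visits': 1}
print(count_visits({"page": "/home"}, {}))  # {'page': '/home', 'visits': 2}
```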

Connecting the dots

So we have discussed the monolith, microservices, containers, and serverless. Each concept is important and leads to the next. Serverless uses containers extensively, because each function executes in its own container, which is then destroyed. Serverless functions fit well within a microservice architecture, and microservices evolved out of the challenges of the monolith. Except for serverless, these are not new concepts; we have been using them in the real world, in application programming and system programming, knowingly or unknowingly.

Think before you decide

As I stated at the beginning, there is no silver bullet for every problem. You have to think through and evaluate the application requirements, the available resources, etc., before deciding on an application architecture.
