The Nature of Serverless

Rodrigo Estrada
Jul 6 · 5 min read

It is tempting to see disruption in every change; it is easier than admitting the superficiality with which concepts and technologies are understood and used. In practice, most changes are the product of gradual, iterative evolution.

It is rare for a small group of people or organizations to produce a genuinely disruptive concept in a short period of time.

Cloud is a very broad term that, from the providers' point of view, usually covers IaaS, PaaS, and Serverless. In the context of the present discussion, Cloud is the status quo of its current use: applications and Cloud Native services based on microservices.

Cloud computing rests on a long maturation process of IaaS, and it emerges as a concept when IaaS is stable enough to take advantage of automation and support a native platform that exploits all of its advantages. When a platform is fully built on IaaS, the location of computational resources and their support becomes irrelevant to the Cloud developer or Cloud administrator, which makes distributed computing easier. Cloud does not compete with IaaS; it is an abstraction layer that rests on top of the IaaS. A clear example is the difference between managed services such as AWS RDS, Google Cloud SQL, and Azure SQL Database Managed Instance on the one hand, and AWS DynamoDB, Google Firestore, or Azure CosmosDB on the other. The former are a euphemism for IaaS deployments managed by a provider, while the latter are technologies created from scratch to take advantage of the IaaS, that is, services native to the cloud. As a clarification, all of these databases also have cloud-native versions, such as AWS Aurora, Azure SQL, and Google Cloud SQL itself, but they still carry many of the restrictions of a managed IaaS because their base technology is not native. A very special exception is Google Cloud Spanner, which promises to provide truly distributed ANSI SQL.

In principle, the user of a service should not care about its implementation as long as the service behaves externally like a Cloud service. However, even a little use reveals the difference in aspects such as downtime, maintenance, scalability, billing, and so on. The same pattern holds for applications developed by organizations: regardless of whether an application runs in the cloud, if it is not native, its IaaS nature quickly becomes evident.

The Cloud is a relevant improvement, but its services still spend most of their time (at computational scale) waiting to be used. This is palpable in the fact that the concern is still to maximize the time that services are up, which means the use of computational resources remains a relevant constraint in application design. Auto-scaling mitigates this situation, but it is still a workaround, because the availability of a service is still heavily affected by capacity planning. This model favors the boom of microservices and the dominance of synchronous calls (REST), a model that has worked very well but that, due to its older technology, generates an impedance that prevents taking full advantage of the maturation of the Cloud as a platform.

Just as the Cloud arises from the native use of IaaS at a mature stage, Serverless arises from the native use of the Cloud at a mature stage.

The main question Serverless answers is: what happens if, instead of worrying about how long services are up, we worry about how long they are down? Moving the alert away from maintaining very high uptime drastically decreases the stress caused by maintenance and the urgency of resolving incidents. Under this paradigm, a defect affects the budget, but not the client. The way to achieve this change of perspective is to recognize that, thanks to the maturation of Cloud Computing and the containerization of applications, a purer, simpler, and more elegant solution is possible: use brute force, resting all the complexity on the Cloud, which in turn rests on the IaaS. The solution is something that was unimaginable a few years ago: create resources when a request arrives and destroy them when the request ends, that is, resources on demand.
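The on-demand lifecycle described above can be sketched with a toy context manager. The provisioning "API" here is a hypothetical in-memory stand-in, not a real cloud SDK; the point is only the create-on-request, destroy-on-completion shape:

```python
from contextlib import contextmanager

# Hypothetical in-memory stand-in for a cloud provisioning API.
ACTIVE_RESOURCES = []

@contextmanager
def on_demand_resource(name):
    """Provision a resource when a request arrives and destroy it
    as soon as the request finishes: the serverless lifecycle."""
    ACTIVE_RESOURCES.append(name)      # "create" when the request arrives
    try:
        yield name
    finally:
        ACTIVE_RESOURCES.remove(name)  # "destroy" when the request ends

def handle_request(payload):
    with on_demand_resource("worker-1") as worker:
        return f"{worker} processed {payload}"

result = handle_request("order-42")
# Between requests, nothing stays allocated.
assert ACTIVE_RESOURCES == []
```

Nothing is ever idle here: the resource exists only for the duration of the request, which is exactly the inversion of the "maximize uptime" mindset.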

Scalability and uptime are no longer relevant concerns as long as the Cloud, and the IaaS on which it rests, have a critical mass of applications running on them that justifies an efficient resource buffer. This condition occurs in worldwide-scale organizations, or simply in a public Cloud such as AWS, Google Cloud, or Azure. Under an on-demand model, the use of events is paramount, and synchronous technologies such as microservices or REST are not a natural option.

A native Serverless application or service must aspire to asynchronism and events in all its components.

The first serverless services are related to the execution of logic, such as AWS Lambda, Google Cloud Functions, or Azure Functions. To have a completely asynchronous environment, they must be complemented, at a minimum, with serverless technologies for data storage and for the interface. For data storage, Google Firestore comes closest, because it is natively built on events and imposes severe restrictions to privilege asynchronism. Azure CosmosDB, in its native use, also allows asynchronism and is an excellent complement to Azure Functions. AWS DynamoDB does not have the elegance of the previous two, but it is a pioneer and has a mature event integration with AWS Lambda, which is itself pioneering and mature. For interfaces, HTTP/2 is the answer for implementing APIs, and gRPC is currently the most mature protocol, with support in mobile applications and the most widely used browsers.
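As an illustration of the event integration mentioned above, here is a minimal handler sketch, loosely modeled on the shape of a DynamoDB Streams event delivered to a Lambda function. The event structure is simplified and the field values are hypothetical; it runs locally as plain Python:

```python
# Minimal sketch of an event-driven function handler. The event shape
# loosely follows a DynamoDB Streams batch (a "Records" list), but is
# simplified and illustrative, not a complete rendition of the format.

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        # React only to newly inserted items.
        if record.get("eventName") == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            processed.append(new_image["id"]["S"])
    return {"processed": processed}

sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"id": {"S": "order-1"}}}},
        {"eventName": "REMOVE",
         "dynamodb": {"OldImage": {"id": {"S": "order-0"}}}},
    ]
}
```

The function is never called directly by a client; it is invoked because data changed, which is the asynchronous, event-first posture the paragraph describes.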

In the case of data processing, such as Machine Learning, aggregation, cleaning, enrichment, and analysis, this approach was already the most natural one long before the term serverless was coined.

To exemplify the difference between a Native Serverless application and a Managed Cloud one (analogous to a Cloud Native application versus a managed IaaS), consider containers as a service, such as Azure ACI or AWS Fargate. If you create a container that spends most of its time waiting, like a microservice, in practice it is very similar to using Azure App Services or AWS Elastic Beanstalk (with an orchestrator) or a VM (without one): we are facing a Managed Cloud. If the container is created in response to an event and destroyed as soon as execution ends, we are facing a Native Serverless service.
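A toy calculation makes the cost consequence of the two lifecycles concrete. The request timings and the billing window are hypothetical numbers, chosen only to show what each model pays for:

```python
# Toy comparison of the two container lifecycles described above.
# Three requests arrive during a 60-second window; each needs a few
# seconds of actual work. All numbers are hypothetical.

REQUESTS = [("t=0", 2), ("t=10", 1), ("t=50", 3)]  # (arrival, seconds of work)

# Always-on container (Managed Cloud): billed for the whole window
# it stays up, whether or not it is doing anything.
always_on_billed = 60

# Per-event container (Serverless Native): billed only while a
# container actually exists, i.e. while handling an event.
per_event_billed = sum(duration for _, duration in REQUESTS)

assert per_event_billed < always_on_billed  # 6 seconds vs 60 seconds
```

The gap widens as traffic gets burstier, which is why the waiting-to-be-used pattern of the Managed Cloud is the thing Serverless is designed to eliminate.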

Finally, it is important to understand the difference between orchestration and choreography. When active services such as microservices dominate, orchestration is the dominant coordination technology; the most explicit example in the Cloud is Kubernetes. When passive services such as functions are the default, choreography should be the dominant coordination model. It is here that the evolutionary relationship between Serverless, Cloud, and IaaS stands out: to implement serverless on-premise, you need a platform for event-driven functions, which needs a platform to orchestrate containers, which needs a platform to manage computational resources. For example, Kubeless on Kubernetes on OpenStack. Probably in the future, with hardware simplification and 5G, it will be possible to go further and use choreography and serverless directly over massive numbers of tiny devices: Skynet arises!
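The contrast between the two coordination models can be sketched with a hypothetical in-process event bus. In choreography, each service subscribes to events and reacts on its own; no central orchestrator tells anyone what to do next:

```python
from collections import defaultdict

# Hypothetical in-process event bus: event type -> list of reacting services.
subscribers = defaultdict(list)

def on(event_type):
    """Register a function as a reaction to an event type."""
    def register(fn):
        subscribers[event_type].append(fn)
        return fn
    return register

def emit(event_type, payload, log):
    """Deliver an event to every subscriber; no one is called directly."""
    for fn in subscribers[event_type]:
        fn(payload, log)

@on("order_placed")
def charge_payment(payload, log):
    log.append(f"charged {payload}")
    emit("payment_done", payload, log)  # this service emits its own event

@on("payment_done")
def ship_order(payload, log):
    log.append(f"shipped {payload}")

log = []
emit("order_placed", "order-7", log)
# The flow "charge, then ship" emerges from reactions, not from a
# central controller that knows the whole sequence.
```

An orchestrator would instead hold the sequence itself ("call charge, then call ship"); here the sequence is distributed across the services' own subscriptions, which is the natural fit for passive, event-triggered functions.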