Serverless — Principles, Use Cases, Pattern and (more than) Function as a Service
In the world of Infrastructure/Platform/Container/Software as a Service, Function as a Service (FaaS) is talked about more and more. For most cloud natives among you, the concept of FaaS is old hat by now. Unfortunately, FaaS is often equated with serverless, which is not quite correct, as we will see in the next few minutes. I would like to look at what serverless actually consists of, why FaaS != serverless, and for which applications serverless is an interesting choice.

FaaS slots in between Backend as a Service and Software as a Service. In general, FaaS can be understood as a platform for executing code without having to worry about the underlying infrastructure. Serverless, on the other hand, is an architectural approach that combines several components from Backend, Function and Software as a Service.
Serverless & its principles
Serverless literally means “without a server”. Of course, this is not quite correct, because at the end of the day there are always servers. It would be more accurate to say “I don’t want to worry about servers”, but that name would probably be a little too long. Potes and Nair introduced the “serverless compute manifesto” a few years ago. It comprises the following eight points:
- Functions are the unit of deployment and scaling.
- No machines, VMs, or containers visible in the programming model.
- Permanent storage lives elsewhere.
- Scales per request; users cannot over- or under-provision capacity.
- Never pay for idle (no cold servers/containers or their costs).
- Implicitly fault-tolerant because functions can run anywhere.
- BYOC — Bring Your Own Code.
- Metrics and logging are a universal right.
Basically, the manifesto defines serverless as pure functionality that is available when needed. Everything else (computing power, memory, network) is taken as given, so that only the functions and their interaction matter in operation. In my opinion, the manifesto is therefore more of a “Function as a Service manifesto”, because serverless is more than just isolated functions. The principles of Sbarski and Kroonenburg are better suited to the concrete development of serverless solutions: they are more practical, easier to grasp and communicate, and they describe the overall context of serverless more clearly.
1. Use a compute service to execute code on demand (no servers)
The first principle should be familiar to you, because it also appears in the manifesto as “scales per request”. Basically, it says that server capacity/computing power is only used when it is needed. This is difficult to achieve with classic servers or VMs, which can take up to several minutes to start. Almost all cloud providers therefore offer corresponding services; the best known are AWS Lambda, Google Cloud Functions and Azure Functions. These services can be summarized as Function as a Service (FaaS) and basically work on the principle that only the code of a single function is loaded into the service. The cloud platforms offer further integrations around their FaaS solutions, e.g. to databases, block storage or messaging systems.
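To make the idea concrete, here is a minimal sketch of such a function in the AWS Lambda style (the event shape and `name` field are made up for illustration): the platform invokes the handler per request and the code contains nothing about servers, scaling or networking.

```python
import json

def handler(event, context):
    """Minimal FaaS-style entry point: receives an event, returns a response.
    Provisioning and scaling are entirely the platform's job."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The same handler shape, with platform-specific signatures, applies to Google Cloud Functions and Azure Functions as well.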
2. Write single-purpose stateless functions
As the term “Function” as a Service makes clear, it is not about deploying an entire application or extensive code segments. One function serves one purpose: when a file lands in storage, perform a database manipulation, hand a file to the next service (video transcoding, image recognition, etc.), look up an order status in the database… It is also important to make the functions stateless, because as soon as a function has finished its work it is “shut down” or deactivated again. Unlike a VM (with persistent storage), it does not retain its memory, which is completely discarded. A question that is often raised here is: “How big is a function?” There is probably no scientifically validated answer, but most functions range between 10 and 100 lines of code. What this principle does not mean is outsourcing every individual process step into its own function, even if the clean coders among you might be tempted. That would lead to unnecessary and unmanageable complexity.
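A stateless, single-purpose function might look like the following sketch (the order-status example from above; `order_store` is a hypothetical stand-in for an external database such as DynamoDB or Firebase): all state comes in via the event or the external store, and nothing survives the invocation.

```python
def get_order_status(event, order_store):
    """Single-purpose and stateless: every piece of state the function needs
    arrives via the event or lives in an external store, never in local memory."""
    order_id = event["order_id"]
    order = order_store.get(order_id)   # external lookup, not instance state
    if order is None:
        return {"statusCode": 404, "body": "order not found"}
    return {"statusCode": 200, "body": order["status"]}
```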
3. Design push-based, event-driven pipelines.
A key to serverless operation is the push-based design of the system: every process step triggers the next one. These triggers are referred to as events. In pull-based systems, the resources themselves decide to what extent they react to events and allocate capacity accordingly; in push-based systems, the individual components or services must scale with the incoming requests, which is exactly the core property of serverless. This is difficult to achieve with a classic message queue system. Apache Kafka, for example, relies on a publish & subscribe model: although messages are pushed to the broker, consumers still have to retrieve them from there. There are of course creative minds who start a function every few seconds or minutes to poll the queue, but this indicates bad system design or a misunderstanding of serverless.
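As a sketch of such a push-based chain (all names here are illustrative): a function reacts to an object-created event and immediately pushes a follow-up event to the next stage, so the next function is triggered rather than polling.

```python
def make_thumbnail_handler(publish):
    """Build a handler for an object-created event. `publish` stands in for a
    push mechanism such as an SNS or EventBridge client: the handler pushes
    a new event onward instead of any downstream component polling."""
    def handler(event, context=None):
        key = event["object_key"]
        thumb_key = f"thumbnails/{key}"   # pretend the image was resized here
        publish({"event": "thumbnail_created", "object_key": thumb_key})
        return thumb_key
    return handler
```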
4. Create thicker, more powerful front ends
Computers and transmission networks are becoming increasingly powerful. Frontends can therefore display more than just static content and are able to cover more extensive parts of the business logic. Serverless benefits from this development, since the frontends can talk directly (usually via an API gateway) to the relevant functions. Of course, this does not apply to everything, e.g. for security or data protection reasons: if you run a platform on which payments are made, the payments should be processed in the backend or via a service provider. With more modern interfaces such as GraphQL, it is no longer necessary to run several REST queries to retrieve a reasonably complex dataset, which prevents a web or mobile application from communicating with the backend more than necessary.
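As a small illustration of that last point (the schema and field names are invented): where REST would typically need `GET /users/1` plus `GET /users/1/orders`, a thick frontend can send one GraphQL request that fetches the user together with their orders.

```python
import json

# One GraphQL query replaces two (or more) REST round trips.
QUERY = """
query UserWithOrders($id: ID!) {
  user(id: $id) {
    name
    orders { id status }
  }
}
"""

def build_graphql_payload(user_id):
    """Build the single POST body the frontend sends to the GraphQL endpoint."""
    return json.dumps({"query": QUERY, "variables": {"id": user_id}})
```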
5. Embrace third-party services
Another exciting point not addressed in the manifesto is the use of additional services, not only from cloud providers but also from third parties. The key statement is: you don’t have to worry about running any infrastructure; only the functionality and some network management remain at the end. The cloud providers offer a wide range of services, from databases and storage, messaging and serverless hosting options to machine learning, transcoding and translation services. Strong specialized solutions are also emerging on the market, especially in identity and access management, with exciting products such as Auth0, or Firebase, which is almost more than just a Backend as a Service and provides authentication and other functionality.
The use and combination of independent services is a core element of serverless. This makes very rapid development and less complex operation possible in the first place.
Function as a Service != Serverless
As I showed before, there is a difference between FaaS and serverless. When talking about FaaS, names like AWS Lambda, Google Cloud Functions, OpenWhisk and OpenFaaS come up. In short, FaaS is the platform that allows me to run function-oriented code on a system and have it addressed by other services.
Serverless, on the other hand, is an architectural approach, or a set of principles, for implementing solutions in which operating your own infrastructure plays a subordinate role. Serverless instead relies on managed services such as Firebase or DynamoDB. Mike Roberts once summed this up very well:
“Serverless architectures are application designs that incorporate third-party ‘Backend as a Service’ (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a ‘Functions as a Service’ (FaaS) platform.”
The following picture shows, very roughly, the context of serverless. FaaS is one component for realizing serverless, as many other products/services managed by the cloud provider can be. Virtual machines (EC2), i.e. IaaS, are not part of serverless, however, because you have to take care of the operating system, installed system components (web servers, database engines, etc.) and updates.

Use cases for serverless
We’ve looked at what FaaS and serverless are and what the differences are. We now know the context of serverless and what is possible within it. So the big question remains: what do we do with it now?
API proxy for legacy systems
Especially in old and large companies, IT has to deal with cumbersome legacy systems. These often expose massive APIs and are almost never touched. Serverless in combination with an API gateway can create an abstraction layer and provide a simple REST API to the outside world. The complexity of translating between REST and the old API lives in the function. This means that even old systems can be made more easily accessible to newer ones. The functions often convert data from one format to another, adapt the encoding or enrich the request with further information.
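A sketch of such a proxy function, assuming a made-up legacy system that expects XML (all element and field names here are invented): the function accepts the modern JSON body from the API gateway and translates it into the legacy request format.

```python
import json
import xml.etree.ElementTree as ET

def proxy_handler(event, context=None):
    """Sits behind an API gateway: turns a modern JSON request body into the
    XML request the legacy system expects (hypothetical schema)."""
    body = json.loads(event["body"])
    req = ET.Element("OrderStatusRequest")
    ET.SubElement(req, "OrderNumber").text = str(body["order_id"])
    ET.SubElement(req, "Client").text = body.get("client", "web")
    legacy_payload = ET.tostring(req, encoding="unicode")
    # here the function would call the legacy API and map the reply back to JSON
    return legacy_payload
```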

Data manipulation

Data manipulation goes hand in hand with the previous example, so it is only briefly touched on here. Serverless is excellent for moving, manipulating, transcoding and aggregating data (JSON, XML, images, etc.). From an architectural point of view, this can be filed under “compute as glue”, i.e. as an adhesive between several system components. In some cases one can already speak of ETL (extract, transform, load).
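A tiny transform step of this kind might look as follows (the record shape is invented): the function normalizes field names and drops incomplete rows, exactly the glue code a serverless ETL function runs per batch of records.

```python
def transform_records(records):
    """Transform step of a minimal ETL pipeline: normalize field names and
    drop rows that cannot be keyed, one batch per invocation."""
    out = []
    for rec in records:
        if "user_id" not in rec:
            continue                      # drop rows without a key
        out.append({
            "userId": rec["user_id"],
            "amount": float(rec.get("amount", 0)),
        })
    return out
```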
Serverless backend
If you use serverless throughout, you talk about a serverless backend or “compute as backend” architecture, which means: using all the serverless principles, you build an application in which you don’t have to care about servers, only about the pure functionality. In the following diagram, the services within a cloud provider are marked with yellow text, while those with black text are external services. The application is roughly designed as follows:
- When a URL is called, the static website is first delivered from the S3 bucket via CloudFront.
- The website authenticates and authorizes the user through Auth0, a service provider for authentication.
- Data is then loaded from the real-time database Firebase and displayed to the user.
- If the user adjusts their profile picture, for example, it is transferred via the API gateway to a Lambda function that stores the picture in S3 (left Lambda).
- Data and information changes are written directly from the frontend to Firebase.
- However, if the user initiates a payment process, for example, it is processed via a Lambda function and the new status, such as a subscription, is written back to Firebase.
- All data is fed into a queue (SQS) via a Lambda function and imported into Redshift, as a DWH solution, by another Lambda function.
- Another Lambda function collects metrics, logs and metadata and writes them to Elastic Cloud for visualization, alerting and monitoring of the application.
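One of these steps, the payment function that writes the new status back to Firebase, could be sketched like this (both `charge` and `db_write` are hypothetical stand-ins for the payment provider call and the Firebase write):

```python
def make_payment_handler(charge, db_write):
    """Sketch of the payment step: `charge` stands in for the payment
    provider's API, `db_write` for writing the status back to Firebase."""
    def handler(event, context=None):
        receipt = charge(event["user_id"], event["amount"])
        status = "paid" if receipt["ok"] else "failed"
        db_write(f"users/{event['user_id']}/subscription", status)
        return status
    return handler
```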

This is a quite simple example which fits many use cases. From this point on, complexity increases with every requirement. No matter how big the system eventually gets, you only pay for what you use, while at the same time it can scale almost without limits.
Finally, I would like to introduce three other use cases:
- First, most FaaS providers allow functions to be scheduled to run regularly or at certain times. This could be used, for example, to clean up databases, trigger settlements or transfer data to long-term storage.
- Second, with the rise of chatbots and voice assistants, the relevance of FaaS has increased even further, as individual functions can be addressed easily and flexibly. This allows developers to implement their own cost-effective solutions.
- Third, even if REST interfaces have only really become fashionable in the last 3–4 years, there is now an even more exciting way to pull data from, or manipulate it in, the backend: GraphQL. GraphQL is a very simple query language that can run on a server or serverless and address one or more data sources. The interesting thing is that for complex queries only one query has to be sent (which is also quite simple), while with REST you have to combine several queries to get the right result.
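The first of these use cases, a scheduled cleanup, can be sketched as follows (the event and record shapes are invented; `records` stands in for a database table the scheduled function would scan):

```python
import datetime

def cleanup_handler(event, records):
    """Run by a cron-style schedule trigger: keep only records created within
    the last 30 days; everything older would be deleted or archived."""
    now = datetime.datetime.fromisoformat(event["now"])
    cutoff = now - datetime.timedelta(days=30)
    return [r for r in records
            if datetime.datetime.fromisoformat(r["created"]) >= cutoff]
```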
Nevertheless, serverless also has some trade-offs. Systems that are continuously under high load, need a lot of computing power, or where every millisecond counts are not happy in a serverless environment.
In conclusion, I can only say that serverless offers a great opportunity to build fast, affordable and flexible systems from scratch, which by default know no scaling limits. Especially if you have new ideas, you can realize and test them easily. Even after “lift and shift” migrations, i.e. simply transferring on-premise hosted applications to a cloud provider, serverless can bridge the gap between the “old” and new worlds. Also exciting are the developments in the open source environment, where some interesting projects already help with the creation and administration of serverless applications, such as the Serverless Framework, Zeit Now or Apex Up. Have a look, and have a great day!
P.S. I originally posted this in German at IT-Talents: https://www.it-talents.de/blog/gastbeitraege/serverless-prinzipien-use-cases-und-mehr-als-nur-function-as-a-service
P.P.S. I created these pictures with https://cloudcraft.co/
