It’s just someone else’s server!

From servers to functions, the serverless story

Despite the name, serverless does not mean running code without servers. It is rather an architectural style in which the system owner does not own or manage the underlying infrastructure. In other words, it’s just someone else’s server!

Solutions using this architectural style typically rely on one of two techniques:

  • Backend-as-a-Service (BaaS), where third-party remote application services are tightly integrated into the front end of the application; or
  • Functions-as-a-Service (FaaS), where server-side code is hosted in ephemeral function instances rather than long-running components.

When people talk about serverless they typically mean the latter (FaaS), which represents an evolution in how computing is consumed rather than in the underlying compute itself, based on a few key principles:

  • Complete abstraction of servers for the developer: no server management and no software or runtime to install, maintain, or administer.
  • Billing based on consumption and execution — there is no idle capacity and no need to pre-provision or over-provision server capacity. There is no charge when code isn’t running.
  • Services that are event-driven and instantaneously scalable: applications scale automatically by adjusting their units of consumption (e.g. throughput, memory), akin to vertical scaling, rather than the number of individual servers, as in horizontal scaling.
  • Applications have built-in availability and fault tolerance — the developer does not need to architect for these capabilities because the services running the application provide them by default.

The major cloud providers all offer this capability with similar functionality: AWS Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions.
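To make the model concrete, here is a minimal sketch of what such a function can look like on one of these platforms (AWS Lambda with the Python runtime). The event shape and greeting logic are assumptions for illustration, not a prescribed pattern.

```python
# Minimal FaaS sketch (AWS Lambda, Python runtime). There is no server,
# framework, or process lifecycle to manage: the platform calls this
# handler once per event. The event shape below is assumed for
# illustration.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}
```

The developer ships only the handler; provisioning, scaling, and per-invocation billing are handled by the platform.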


As we move from Platform-as-a-Service (PaaS) to Function-as-a-Service (FaaS), it is important to understand the differences and their consequences.

Different cloud computing types

PaaS greatly simplifies the deployment of applications, allowing the developer to focus on the application while the ‘cloud’ provider worries about how to deploy the servers to run it. Most PaaS hosting options can auto-scale the number of servers to handle workloads and downsize to save money during times of low usage.

FaaS, on the other hand, provides the ability to deploy a single function or part of an application. FaaS lends itself naturally to a serverless architecture, although most providers also allow you to dedicate resources to a function.

When deployed as PaaS, an application typically runs on at least one server at all times. With FaaS, it may not be running at all until the function needs to be executed: the platform starts the function within a few milliseconds and shuts it down afterwards, which makes state management a key concern.
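This lifecycle is easiest to see in code. Below is a sketch assuming AWS Lambda's Python runtime: anything outside the handler runs once per container start (the cold start), the handler runs on every invocation, and the container may be frozen or discarded at any time between calls. The CONFIG_URL environment variable is hypothetical.

```python
import os
import urllib.request

# Runs once per container start (cold start): fetch configuration that
# can safely be reused across warm invocations. CONFIG_URL is a
# hypothetical environment variable used for illustration.
CONFIG = urllib.request.urlopen(os.environ["CONFIG_URL"]).read()

def lambda_handler(event, context):
    # Runs on every invocation; must not assume this container survives
    # beyond the current call.
    return {"config_bytes": len(CONFIG)}
```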

Benefits

Scaling costs and optimization

Pay as you go is one of the key benefits of FaaS. On the infrastructure side, you only pay for the compute you actually use, which, depending on the scale and shape of your traffic, can be a huge economic win.

An interesting aspect of FaaS is that it ties performance directly to financial benefits: any performance optimization made in the application will not only speed up the application but also directly and immediately reduce operational costs, subject to the granularity of the vendor’s charging scheme.
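A rough calculation shows the link. The workload numbers and rates below are assumptions for illustration only, not any vendor's current pricing; the point is that execution time and allocated memory appear directly in the bill.

```python
# Hypothetical monthly FaaS cost estimate (illustrative rates only).
invocations = 3_000_000
avg_duration_s = 0.200             # average execution time per call
memory_gb = 0.512                  # memory allocated to the function
price_per_gb_second = 0.00001667   # assumed compute rate
price_per_million_requests = 0.20  # assumed request rate

compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_second
request_cost = (invocations / 1_000_000) * price_per_million_requests
print(f"compute ~ ${compute_cost:.2f}, requests ~ ${request_cost:.2f}")

# Halving avg_duration_s (a pure performance optimization) halves
# compute_cost with no other change.
```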

Time to market and experimentation

FaaS shortens time to market, not only by enabling rapid iteration on stable projects but also by allowing new experiments with low friction and minimal cost. As teams, products and solutions become increasingly geared around lean and agile processes, teams want to continually try new things and rapidly update existing systems.

Reduced packaging and deployment complexity

While API gateways are not yet simple, the act of packaging and deploying a FaaS function is trivial compared to deploying an entire server. All that is required is compiling and zipping/jarring the code and uploading it, or even developing it directly in the increasingly capable cloud IDEs some providers offer. No Puppet/Chef, no start/stop shell scripts, no decisions about whether to deploy one or many containers on a machine.
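As a sketch of how small that deployment step can be, the snippet below zips a single handler file and pushes it to an existing function using the AWS SDK for Python (boto3); the file and function names are hypothetical and credentials are assumed to be configured.

```python
import zipfile
import boto3  # AWS SDK for Python; assumes credentials are configured

# Package the code: one zip archive containing the handler module.
with zipfile.ZipFile("function.zip", "w") as archive:
    archive.write("handler.py")  # hypothetical file name

# Upload the package to an existing function ("my-function" is hypothetical).
client = boto3.client("lambda")
with open("function.zip", "rb") as package:
    client.update_function_code(
        FunctionName="my-function",
        ZipFile=package.read(),
    )
```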

Outsource operations

At its simplest, serverless is an outsourcing solution. It lets you pay someone else to manage the servers, databases and even application logic that you would otherwise manage yourself. In some fully serverless solutions, this can lead to close to zero system administration.

Drawbacks

Computing constraints

FaaS functions have significant restrictions when it comes to local (machine/instance bound) state. In short, the developer should assume that for any given invocation of a function none of the in-process or host state created will be available to any subsequent invocation. This includes state in RAM and state you may write to local disk. In other words, FaaS functions are stateless.
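The sketch below illustrates the pitfall, assuming AWS Lambda with DynamoDB as the external store; the table and key names are hypothetical. The in-memory counter only ever reflects the current instance, so durable state has to live outside the function.

```python
import boto3

# Hypothetical DynamoDB table used as the durable, external store.
table = boto3.resource("dynamodb").Table("counters")

local_count = 0  # unreliable: lost whenever this instance is recycled

def lambda_handler(event, context):
    global local_count
    local_count += 1  # counts invocations on *this* instance only

    # Durable alternative: an atomic counter in the external store.
    result = table.update_item(
        Key={"id": "page-views"},
        UpdateExpression="ADD hits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"local": local_count,
            "durable": int(result["Attributes"]["hits"])}
```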

FaaS functions are typically limited in how long each invocation is allowed to run. At the time of writing, AWS Lambda functions are not allowed to run for longer than five minutes; if they do, they are terminated. With FaaS there is no bending the rules.
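One common way to live within the limit is to check the remaining time and stop cleanly rather than being killed mid-task. The sketch below assumes AWS Lambda's Python runtime, which exposes the remaining time on the context object; the work-item shape and process_item function are hypothetical.

```python
import time

def process_item(item):
    # Placeholder for real per-item work (hypothetical).
    time.sleep(0.1)

def lambda_handler(event, context):
    remaining = list(event.get("items", []))
    while remaining:
        # Leave a safety margin instead of running into the hard limit.
        if context.get_remaining_time_in_millis() < 10_000:
            return {"status": "partial", "remaining": remaining}
        process_item(remaining.pop(0))
    return {"status": "done"}
```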

Vendor lock-in

While conceptually similar, many FaaS features are implemented differently across vendors. Switching vendors will almost certainly require adjusting your code (e.g. integrating with a different FaaS interface), updating operational tooling (deployment, monitoring, etc.), and may even force an architectural change.
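Even a trivial function is wired up differently from vendor to vendor, which is where the lock-in starts. The sketch below contrasts the Python entry points of AWS Lambda and Google Cloud Functions; the handler names are arbitrary.

```python
# AWS Lambda (Python runtime): the platform passes a deserialized event
# plus a context object and serializes the return value.
def aws_handler(event, context):
    return {"message": "hello"}

# Google Cloud Functions (Python runtime, HTTP trigger): the platform
# passes a Flask request object and expects a Flask-style response.
def gcf_handler(request):
    return "hello"
```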

Also, as with any outsourcing strategy, it means giving up control of some of the system to a third-party vendor. Such lack of control may manifest as system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades, and more.

Monitoring and debugging

With FaaS, the developer is limited to the monitoring and debugging capabilities offered by the vendor, which in most cases are still quite basic; this is likely to be one of the areas that evolves the most over the coming months.
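In practice, that usually means leaning on the vendor's log service. The sketch below, assuming AWS Lambda's Python runtime, emits structured log lines through the standard logging module, which the platform forwards to CloudWatch Logs; the fields logged are an illustrative choice.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Structured log line; the runtime ships logging output to the
    # vendor's log service (CloudWatch Logs on AWS).
    logger.info(json.dumps({
        "request_id": context.aws_request_id,  # provided by the runtime
        "event_keys": sorted(event.keys()),
    }))
    return {"status": "ok"}
```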

Open APIs and the ability to integrate third-party services will be key to improving deployment/application bundling, configuration, monitoring/logging, and debugging of FaaS applications.

Notice that using FaaS doesn’t mean no operations. It might mean ‘no internal system administration’ in some cases, but operations is about more than server administration: it also means monitoring, deployment, security, networking, and often some amount of production debugging and system scaling.

Multi-tenancy

Multi-tenancy refers to a scenario where software instances for multiple customers (or tenants) run on the same machine, and possibly within the same hosting application, enabling economies of scale.

While service vendors try to make each customer feel as if they are the only one in the system, no one is perfect, and multi-tenant solutions can sometimes have problems with security (one customer being able to see another’s data), robustness (an error in one customer’s software causing a failure in a different customer’s software) and performance (a high-load customer causing another to slow down).

Conclusion

Virtualization made it easier to manage compute resources through the abstraction of the hypervisor, which provided better isolation and encapsulation of an operating system. Containers then took this to the next level by providing isolation and encapsulation of an application on top of an operating system. FaaS platforms extend this concept, leveraging containers for encapsulation and isolation while allowing an application to be decomposed into functions through the introduction of a low-latency message bus.

All design is about trade-offs. There are some distinct advantages to applications built in this style, and some problems too: review your solution and decide whether this style fits your architecture and design.


Note on Security

While adopting a cloud provider raises relevant security concerns about potential exploits of a large-scale ecosystem, real-world experience tells me that companies have traditionally downplayed the importance of security in their own on-premises solutions. Adopting a cloud solution therefore often means adopting new, state-of-the-art security technology that effectively improves the overall system. Nevertheless, this is a complex topic that deserves a longer discussion, so I have consciously decided not to dive deeper into it here.

The views and opinions expressed here are those of the author and do not reflect the official policy or position of any agency, organisation, employer or company.
