Five Things You Need to Know About Serverless

by John Chapin

SingleStone · Nov 8, 2018

SingleStone launched Reverb to offer hands-on training on the tools and technologies changing the way we work. John Chapin will lead our Serverless workshop on November 20th.

1. Serverless doesn’t require management of server hosts or server processes.

Where Serverless goes beyond a traditional Platform as a Service (PaaS), and beyond most uses of containers, is in removing the long-lived ‘server’ component that we define ourselves. Instead, we configure capacity (in the case of Backend as a Service, BaaS) or we define event-driven functions (in the case of Functions as a Service, FaaS), and there is no long-lived server process for us to manage. We no longer have to think about how many server instances a given component needs, and we can skip defining the instance type or size our application requires to run.

This is why we call it Serverless: we are building server-side software solutions, but the servers themselves are abstracted away. They still exist somewhere behind the scenes, but we don’t have to think about them.
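To make the FaaS half of that concrete, here is a minimal sketch of an event-driven function written for AWS Lambda in Python. The handler name and event fields are illustrative, not taken from any particular application; the point is that we write the function, and the platform owns the process that runs it.

```python
# handler.py -- a minimal, illustrative AWS Lambda function.
# There is no long-lived server process here: the platform invokes this
# handler once per event (an HTTP request, a queue message, a file upload)
# and manages the underlying hosts entirely on our behalf.
import json


def handler(event, context):
    """Entry point the Lambda runtime calls for each incoming event."""
    name = event.get("name", "world")  # 'name' is an illustrative field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```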

2. Serverless architectures auto-scale and auto-provision based on load.

With traditional server-side architectures we are responsible for provisioning and scaling our applications’ resources. Serverless changes all of this: the platform auto-provisions the required resources. In other words, as soon as you start using the service, it figures out which resources you need and how much of each, and provisions them for you automatically.

Auto-scaling and auto-provisioning save time and effort, which is especially helpful when you’re developing a new product or service and need fast time to market. Serverless architectures also often reduce infrastructure costs, because the automatic scaling keeps capacity right-sized to actual load.

3. The metered cost of Serverless architectures is very precise.

Serverless services are billed on very precise usage metrics. Consider Elastic Compute Cloud (EC2) from Amazon Web Services (AWS) as a comparison. EC2 provides scalable computing capacity and is billed on a per-second basis, whether the EC2 instance is performing any useful work or not. In contrast, Lambda, AWS’s event-driven Serverless computing platform, is priced on the number of requests your functions receive per month, the duration of those requests, and the amount of memory the functions are configured with. Requests are billed at $0.20 per 1 million requests. Duration is billed in 100 millisecond increments and is measured from the time your code begins executing until it returns or otherwise terminates.
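As a rough sketch of that arithmetic, the estimate below uses the $0.20-per-million-requests figure and the 100 millisecond billing increment quoted above; the per-GB-second duration rate and the example workload are assumptions, so check them against current AWS pricing.

```python
# Back-of-the-envelope monthly Lambda cost estimate (free tier ignored).
REQUEST_PRICE_PER_MILLION = 0.20           # USD per 1M requests (quoted above)
DURATION_PRICE_PER_GB_SECOND = 0.00001667  # USD; assumed list price, verify


def monthly_lambda_cost(requests, avg_duration_ms, memory_mb):
    # Duration is metered in 100 ms increments, so round each invocation up.
    billed_ms = -(-avg_duration_ms // 100) * 100
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    request_cost = (requests / 1_000_000) * REQUEST_PRICE_PER_MILLION
    duration_cost = gb_seconds * DURATION_PRICE_PER_GB_SECOND
    return request_cost + duration_cost


# Hypothetical workload: 5 million requests a month, averaging 120 ms at 512 MB.
print(f"~${monthly_lambda_cost(5_000_000, 120, 512):.2f} per month")
```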

Thanks to these precise usage-based prices, and the auto-provisioning and auto-scaling described above, the cost of a Serverless system tracks actual load far more closely than that of other deployment architectures.

4. You can’t define a Serverless service’s performance capabilities in terms of host size or count.

Since we’re not choosing the number, type or size of servers in our Serverless architecture, how do we define the performance requirements that we need in order to run our Serverless applications?

With AWS Lambda we have one performance dial: RAM. You can’t choose the CPU speed you want, ask for more cores, or request a different amount of local disk storage.
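For illustration, here is a hedged sketch of turning that single dial with boto3, the AWS SDK for Python; the function name is hypothetical. Memory is the only setting we change, and the platform allocates CPU in proportion to it.

```python
# A minimal sketch of adjusting AWS Lambda's single performance dial.
# "my-function" is a hypothetical function name used for illustration.
import boto3

lambda_client = boto3.client("lambda")

# MemorySize is the only performance setting we can turn: there is no
# separate option for CPU speed, core count, or local disk size.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=512,  # MB
)
```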

With Azure App Service Plans, there are a few more dials to turn, but that quickly begins to feel like you’re managing servers again. In some ways this is not a new problem (think virtualization and containerization), but it does point to the immaturity of Serverless architectures and we expect configuration controls to expand in the future.

5. Serverless offers implicit high availability.

The term high availability (HA) refers to software’s ability to keep operating even when an instance of a component fails. We usually implement HA through some kind of redundancy. With Serverless, that redundancy is implicit, but it has its limits. On AWS, a Serverless application built on Lambda can survive the failure of an entire availability zone, which roughly maps to a datacenter. However, Serverless does not offer implicit disaster recovery; we still need to consider disaster scenarios in our architectural planning.

John Chapin is a co-founder of Symphonia, and will be leading a one-day Serverless workshop in Richmond, VA, on November 20th. For more information or to buy tickets, visit Reverb.

This piece was inspired by a five-part blog series called Defining Serverless, written by Chapin’s co-founder Mike Roberts.

Their highly regarded talks and workshops are regularly featured at conferences, including the Software Architecture Conference, Velocity, OSCON, QCon, ServerlessConf and AWS re:Invent.


SingleStone is a technology consulting company. We help businesses keep up with tech so they can keep up with their customers.