Serverless computing: what problems does it solve?
According to a survey carried out by Rightscale.com, serverless computing is extremely popular, with many developers adopting this approach. Still, there is no clear definition of what serverless actually is, partly because it overlaps two different areas.
First of all, serverless can be viewed as a backend-as-a-service, which aims to replace your custom server-side logic with off-the-shelf hosted services. These include features like data storage, authentication and API management.
Serverless can also refer to event-triggered, stateless containers hosting applications or parts of applications. These are orchestrated by a highly performant platform that knows when to scale up and down.
The applications within a serverless environment are built by combining a set of microservices that interact with one another — rather than having a monolithic application hosted on a server.
Nowadays many software developers choose serverless architectures because they want to focus only on what is important: finding great solutions to complex problems. Here are some of the reasons why many software developers move to a serverless architecture:
“I just want to write my business logic”
Let’s say you have some data and you want to do something to it so that it produces valuable output. That something is what we call business logic, and it is at the heart of every application. In the old days, you had to take care of a lot of moving parts before you could actually start building that heart. To make matters worse, many of these tasks were generic and repetitive, consisting of boilerplate code.
That’s why a new way of doing things had to be invented. Serverless is that solution: it relies on the fact that many of these repetitive tasks can be abstracted away.
“I don’t want to run things myself”
If you host an application yourself, someone needs to maintain your servers. And in case of a hardware malfunction, your app will go offline. I don’t need to spell out what that does to customer trust.
“When my code is not running, I don’t want to pay for it.”
In the serverless world, the platform is always aware of whether there is work to do for your application. If there is, it fires up your code and shuts it down afterwards, making it very cost-efficient.
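A minimal sketch can make this concrete. The handler name and event shape below are illustrative assumptions, not any specific provider's API: the point is that the function is stateless and runs only when the platform hands it an event.

```python
# A platform-agnostic sketch of an event-triggered, stateless function.
# The handler signature and event fields are assumptions for illustration,
# not any specific serverless provider's API.

def handler(event):
    """Runs only when an event arrives; keeps no state between invocations."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

# The platform would invoke it roughly like this, then tear the container down:
response = handler({"name": "Jexia"})
print(response["body"])  # Hello, Jexia!
```

Because the function holds no state of its own, the platform is free to start it, stop it, or run many copies in parallel, which is exactly what makes the pay-per-invocation model possible.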
“I want my application to scale as needed and don’t want to think more about it”
As your application matures and starts to attract more traffic, scalability problems can arise. When this happens, you will need to invest in hardware and people just to stay ahead of these problems. As you can see, this is not an ideal situation.
Serverless platforms eliminate this problem. By default, such a platform is designed to know when it needs to scale up and down your application.
At Jexia we are building a serverless platform where ease of use, scalability and high availability are key elements. Let’s dive just a touch deeper into the topic of scalability (still at a very high level, though).
As we already mentioned, in a serverless environment scalability is important. Okay, but what exactly is scalability? And how do we apply it to our development architecture, when we are building an application?
There is a lot to say when it comes to scalability. Let’s start with the most generic definition and take it from there.
The definition that I think covers most of it is this:
The ability of something (a system) to adapt to increased demand.
Pure and simple, this means your system should be able to handle a certain amount of traffic, and when that amount of traffic grows, your system should be able to adapt to the changes instead of breaking down.
In this article we cannot cover all aspects of scalability, so we will stick to a select few. We will clarify the difference between vertical and horizontal scaling and give some examples of both.
Vertical versus horizontal scaling
Some time ago, I heard a nice example that visualizes the difference between vertical and horizontal scaling very clearly.
Imagine you go to a party and 20 friends want to join you. Five people fit in your family sedan, but to take the rest, you would have to buy a bigger vehicle; with 20 people, you would need a bus. In a way, this is vertical scaling: you buy a bigger vehicle to handle a heavier load.
Vertical scaling is also known as scaling up. When your service starts receiving more and more traffic and your servers can’t handle it anymore, you can upgrade them with more RAM or CPU. This is a simple and fast solution, but one that is likely to quickly reach its limits.
Now imagine you divide your 20 friends over four cars with five people each. This is what we mean by horizontal scaling, also known as scaling out. It is far more flexible: as traffic grows, you can simply add more servers and spread the traffic among them using load balancers. In terms of databases, horizontal scaling is based on partitioning the data, so that each node contains only part of it.
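A toy sketch of that partitioning idea: route each record to a node by hashing its key. The node names and the simple modulo scheme are assumptions for illustration; real databases use more sophisticated schemes (consistent hashing, range partitioning) so that adding a node does not reshuffle all the data.

```python
# Illustrative hash-based sharding: each node stores only the records
# whose keys hash to it. Node names are made up for this example.
import hashlib

NODES = ["db-node-0", "db-node-1", "db-node-2", "db-node-3"]

def node_for(key: str) -> str:
    """Deterministically map a record key to the node that stores it."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always lands on the same node, so reads know where to look:
print(node_for("user:42"))
```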
Imagine you have an app server and a DB server both running on the same machine.
As traffic grows you can only adapt by adding more RAM or CPU. Aside from the fact there’s only so much performance you can squeeze out from a single machine, there’s another issue.
That other disadvantage is that the two are coupled: if the machine goes down, your app server and your DB server go down together.
The load balancer distributes the workload over the machines.
In the picture you see multiple servers in a cluster. If one of the servers fails, the other takes over. But what if the load balancer fails? There are still too many single points of failure in this construction.
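The distribution itself can be very simple. A common strategy is round-robin, where requests are handed to each server in turn; the sketch below uses made-up server names and ignores health checks and sticky sessions, which a real load balancer would also handle.

```python
# Toy round-robin load balancer: hand each incoming request to the
# next server in rotation. Server names are assumptions for illustration.
from itertools import cycle

SERVERS = ["app-1", "app-2", "app-3"]
_rotation = cycle(SERVERS)

def route(request: str) -> str:
    """Pick the next server in rotation for this request."""
    return next(_rotation)

print([route(f"req-{i}") for i in range(4)])
# ['app-1', 'app-2', 'app-3', 'app-1']
```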
Load balanced app server cluster
If one load balancer fails, the other can take over and spread the requests over the servers. If one server fails, there are still two servers left to handle the requests. To make the load balancers resilient, they share a virtual IP. If one of the load balancers goes down for any reason, the virtual IP is automatically moved to the second or third load balancer. Software like keepalived, which implements the VRRP protocol, is often used to set up this construction.
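For a feel of what that looks like in practice, here is a hedged sketch of a keepalived configuration for the primary load balancer. The interface name, router ID and IP address are placeholder assumptions; the backup node would carry the same block with `state BACKUP` and a lower `priority`, and would claim the virtual IP if the master stops sending VRRP advertisements.

```
# /etc/keepalived/keepalived.conf on the primary load balancer (sketch).
# Interface, router ID and virtual IP are example values.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
}
```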
Scale up or Scale out?
Both vertical and horizontal scaling have their pros and cons. Roughly speaking, vertical scaling is a more cost-effective way to increase performance and extend the lifecycle of existing hardware, partly because overall performance per watt is higher when you run fewer servers. Furthermore, CPU and RAM upgrades are easy to install and implement, making them a time-efficient way to scale as well.
The biggest disadvantage of vertical scaling is that it is never a long-term solution: a server can only be upgraded up to its performance limits.
Horizontal scaling, on the other hand, can go on endlessly. Hardware expenses, power and cooling costs, and networking equipment costs will be higher in this scenario, but in the end you will have a lasting, scalable architecture.