Serverless as a success, explained from a vendor-neutral view
Despite the commercial catch-all terminology that serverless is often lumped in with, the technology is rapidly conquering the software development world. The latest Stack Overflow developer survey, completed by more than 60,000 engineers, ranks serverless second among “most loved platforms”. Starting from a historical perspective, here’s why it is winning the minds of developers.
From tangible servers to the cloud
Not so long ago, companies were accustomed to investing in hardware. They would buy a server and install it in a storage room; someone’s job was to plug in the cables. When you had more traffic than the server could handle, you paid for a larger server. This is what we call ‘vertical scaling’. Back then, it was genuinely effective, and the concept was straightforward.
Nevertheless, companies faced some hurdles. First, there was no elasticity in scaling. You had to forecast the traffic you expected and pay for a server powerful enough to handle it. If you hit maximum capacity, you had to make a large investment in a bigger server, which could run to tens of thousands of dollars. As enterprises matured, they began to add multiple servers and load-balance traffic across them. This provided some flexibility in scaling but still demanded long-term commitments.
Roughly ten years ago, the cloud began to emerge. Rather than making large up-front investments, you could rent infrastructure at a much finer granularity. You could choose servers of various sizes and lease them by the hour, instead of being tied to a server’s lifecycle. For years this model was successful, as it also improved the utilization rate of servers: rather than investing in a large server with fixed capacity, we can now scale server capacity up or down.
Serverless as the next step in IT infrastructure’s evolution
However, applications became more and more sophisticated. The underlying infrastructure of sophisticated software is hard to manage and hard to keep in sync as changes land. The pressure to build advanced applications faster also increased: companies wanted to empower developers to make improvements to a software project easily. So we began to split those huge monolithic applications into microservices, which are easier to build because they are smaller.
But at the same time, this created an orchestration challenge. Orchestration is the practice of managing individual services: scaling them up or down, restarting them, recovering them, and turning them on and off, with those services running on top of a shared cluster of computing resources.
However, if an application consists of numerous microservices, you have to decide how far each service scales. And you can optimize on more than one level: not only at the infrastructure level, but also at the application level (that is, the resources available to each microservice). That is useful for managing utilization, and it helps developers ship applications more rapidly. But it also adds complexity. You have more points where scaling must be managed, and more points where application health must be monitored. That is a responsibility that requires considerable expertise.
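To make the per-service scaling decision concrete, here is a toy sketch of the kind of proportional rule an orchestrator applies to each microservice independently. The service names, utilization numbers, and thresholds are invented for illustration; this is not any particular orchestrator’s algorithm.

```python
import math

def desired_replicas(current, cpu_utilization, target=0.6, max_replicas=10):
    """Toy proportional scaling rule: pick a replica count that would
    bring per-replica CPU utilization back toward the target."""
    wanted = math.ceil(current * cpu_utilization / target)
    # Clamp between 1 replica and an upper bound.
    return max(1, min(wanted, max_replicas))

# With many microservices, this decision multiplies across the system —
# hypothetical services with (current replicas, observed CPU utilization):
services = {"cart": (2, 0.9), "search": (4, 0.3), "checkout": (1, 0.65)}
for name, (replicas, cpu) in services.items():
    print(name, "->", desired_replicas(replicas, cpu))
```

Each service needs its own target, bounds, and health checks, which is exactly where the operational expertise (and complexity) accumulates.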
Although orchestration is one of its needs, and everybody strives to use infrastructure efficiently, a large part of the market does not see orchestration as a business differentiator. Software development teams and companies do not want to invest in a strong orchestration skill set, simply because they do not consider orchestration a practice that contributes to their business model.
With serverless, software development teams and companies can outsource that need for an orchestration skill set. They let IaaS providers manage the orchestration, and their teams can focus on building applications and looking after their health on top of this new managed infrastructure model.
Serverless is being embraced by both the business and the tech side
Every company, no matter what service it delivers, is trying to ship its products faster and faster. Enterprises seek to maximize their pace at all times. They are eager to create more value for the customer, in order to capture more value from new and existing customers.
At the same time, companies want to control costs. It does not serve their interests to simply throw infrastructure at the problem and scale up indiscriminately. They want to be sure they are increasing the utilization of the infrastructure they invest in. And they want to balance efficiency against the risk that their service might go down, suffer other errors, or otherwise fail to deliver.
Although beneficial for the business, serverless is above all a software development concept.
Programmers deal with many concerns across the complete software delivery lifecycle. If you can concentrate on just one part, you can already increase your output significantly. A programmer delivers most of their value by writing code; they may not excel at handling the underlying infrastructure. Configuring a network properly is a very different expertise from programming. Some professionals focus on both, but especially in a smaller company it is hard to assemble a team that possesses all the skills needed to control the complete lifecycle.
For enterprises, too, there is a competitive edge to be gained if the organization can concentrate on developing and delivering products while specialized cloud providers deal with the boilerplate infrastructure. The company can focus on creating value while (part of) the competition is still managing boilerplate hardware.
Serverless is a natural fit for Functions as a Service (FaaS)
FaaS is a best practice for developing serverless apps, and many applications can be developed much the same way they are built today. Say, for example, you expose an API endpoint and connect it to a function. On each request to that endpoint, your code runs and returns a response. On the surface it works much like a conventional infrastructure setup, but with a small difference: the function does not persist; it starts and stops. Therefore no state carries over from one invocation to the next. Yet it follows the conventional request–response loop that programmers are familiar with.
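The request–response loop above can be sketched as a minimal, stateless handler. The `(event, context)` signature mirrors what several FaaS providers use, but the names and the event shape here are illustrative, not any specific vendor’s API:

```python
import json

def handler(event, context=None):
    """Stateless FaaS-style handler: everything it needs must come in
    through the event (or an external store), because nothing survives
    between invocations."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating one request–response cycle locally:
response = handler({"name": "serverless"})
print(response["statusCode"])  # 200
```

In a real deployment the platform, not your code, would wire the API endpoint to this handler and tear the function down after it responds.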
FaaS also enables other software development models. One that is gaining favour is the event-driven model. Here, you assign your functions to watch for particular event triggers and respond to them. For example, you could watch for a file being uploaded to an object storage bucket; when that happens, you might want to run some code. In this situation you typically see companies running code that transcodes videos or creates thumbnails: when the image is uploaded, the function is notified, creates a thumbnail, and writes it to a different location.
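The thumbnail scenario can be sketched as an event handler. The event fields (`bucket`, `key`) and the naming convention are hypothetical stand-ins for a real object-storage notification; actual image resizing and uploading are left out:

```python
def make_thumbnail_key(key, suffix="-thumb"):
    """Derive an output name, e.g. "photos/cat.png" -> "photos/cat-thumb.png"."""
    stem, dot, ext = key.rpartition(".")
    return f"{stem}{suffix}.{ext}" if dot else f"{key}{suffix}"

def on_upload(event):
    """Invoked by the platform when a file lands in the bucket; the
    function runs, writes its output elsewhere, then stops."""
    bucket = event["bucket"]
    key = event["key"]
    thumb_key = make_thumbnail_key(key)
    # A real function would download the object, resize it, and upload
    # the result; here we only return the plan it would execute.
    return {"source": f"{bucket}/{key}", "thumbnail": f"{bucket}/{thumb_key}"}

print(on_upload({"bucket": "uploads", "key": "photos/cat.png"}))
```

The key point is that nothing calls `on_upload` directly: the storage event does, which is what makes the model event-driven.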
This event-driven model is promising because it allows a much more distributed way of working across company teams. Say the marketing team wants to track events triggered by the e-commerce site. Somebody adds a product to their cart and then leaves the web store. You could write a function that watches for this event and triggers another function in response. That function might generate a marketing email responding to someone leaving the store with a full cart.
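As a sketch of that chain, here is one function reacting to a hypothetical “cart abandoned” event and producing the message a second function would send. The event fields and email wording are invented; in a real system the platform delivers the event and a separate function handles delivery:

```python
def on_cart_abandoned(event):
    """React to a cart-abandonment event by drafting a follow-up email.
    Returns None when there is nothing to recover."""
    items = event["items"]
    if not items:
        return None  # empty cart: no email needed
    subject = f"You left {len(items)} item(s) in your cart"
    return {"to": event["email"], "subject": subject, "items": items}

msg = on_cart_abandoned({
    "email": "shopper@example.com",
    "items": ["red sneakers", "wool socks"],
})
print(msg["subject"])  # You left 2 item(s) in your cart
```

Because the marketing team only subscribes to the event, it never has to touch the e-commerce code that emits it.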
In this case the code is distributed across multiple servers, and the development model itself is distributed too. Multiple company teams can rely on and respond to both the code and the events, while other teams manage the technology under the hood.
Originally published at Jexia.