Why serverless makes sense
A continuously transformative IT culture plays a crucial role in how newly arriving technical disruptors are leveraged and incorporated into a business. As we work through current transformations, it's important not to develop tunnel vision within our existing efforts, but to keep a close watch on the periphery for other disruptors that may become just as important. Serverless may be one of those arriving disruptors, and it deserves our undivided attention soon.
For example, let's consider another recent technical disruption many of us are currently immersed in: containers. Incorporating a container strategy improves the management and runtime of enterprise applications and adds significant value to many business use cases. Containers are now a proven cornerstone of many IT digital transformation efforts, offering opportunities for significant gains across the enterprise. They let us manage and monitor what is often most important to the core business, the application code and the data it generates and processes, at a very fine-grained level of detail. Implementing a container strategy as part of a digital transformation also affords flexibility to set the pace of transformation and doesn't require a forklift upgrade to implement.
More choice, more complexities
With all of the potential of containerizing enterprise applications, and as adoption of container solutions quickly ramps up, comes a massive number of solutions and providers tooling up their offerings with a container flavor of their own, making the calculus of successfully executing a containerization effort less than trivial to plan and achieve. Recently, yet another container runtime was announced: CRI-O, an implementation of the Kubernetes Container Runtime Interface (CRI), bringing another set of opinions than those of the originator of this disruptive technology, Docker. Of course we should expect bugs with any new technology, but since Docker obliged us with the ingenuity of the first publicly supported Docker engine in 2014, many competing opinions have arrived, designed to win market share and contend with the tool sets, APIs, and opinions Docker introduced. Several competing container runtime engines have been brought forward, such as rkt, Clear Containers, and now CRI-O. Each comes with its own set of supporting opinions and philosophies; nonetheless, choosing the right container runtime engine for your digital transformation efforts requires laser precision and lots of insight.
Unfortunately, the decision matrix for implementing a full end-to-end container platform does not get any clearer after choosing a container runtime engine. You will also require infrastructure that provides all of the services, security, and orchestration capabilities needed to run and manage your application containers in a highly controlled manner. Assembling all of the platform components into a complete, operationally ready state on a cluster of host OSs to effectively run your containerized applications is not a trivial task.
Opportunities for improvement
A review of outcomes and assessments of early containerization efforts establishes that further normalization is still clearly needed to drive widespread adoption and acceptance, so that businesses can delineate the benefits quickly and easily. The complexity and inflexibility of our infrastructure and application legacies are often the gears that slow desired timelines and outcomes until transformation essentially grinds to a crawl. One commonly used shortcut is to retrofit legacy applications by simply wrapping running legacy code in containers. While this can be an effective way to introduce the key concepts of running containers into an existing environment, it falls well short of the many additional gains and possibilities available when applications are designed to run correctly in containers.
It is also perhaps understated that our most established and accepted infrastructure legacies are the source of much of the inefficiency and complexity that block successful transformation. We continue to accept, unquestioned, infrastructure and architectures as they have existed in the past: typically oversized host instances with accompanying full-sized OS images. Yet the growing surge in edge computing shows that we can run code far more efficiently, and that each service doesn't require a massive amount of silicon or OS code. We see efficient devices with right-sized OSs and kernels running code very efficiently in TVs, cars, and cell phones. We might then ask: could we not run our business systems in a similar fashion?
Within each instance of our infrastructure today, we also run a large amount of code that is duplicated many times over and not even necessary to run our applications (think of unused device drivers and their associated libraries, or ssh, for example). Running unnecessary code repeatedly across many instances is not only costlier, it also increases the security exposure the business must defend and pay for. These inefficiencies add up to a much larger bill, more complexity, slower delivery, runtime inefficiency, and a much heavier security burden to manage.
Thankfully, most successful digital transformations are iterative, ongoing efforts that identify and remediate the complexity and technical debt slowing execution, continually driving out inefficiency and waste while improving performance and scale. How much infrastructure a business continues to pay for, versus paying only for what it needs, matters, and it is an area where technically driven digital transformation can add directly to the bottom line. Traditional infrastructure identifiers and metrics, such as the brand or location of the hardware or the name of the OS, are simply irrelevant to most businesses' core focus. Ideally, the enterprise should strive to pay only for the compute resources needed to run its application code, regardless of location or vendor.
Serverless architectures enable many paths to such infrastructure transformation. They provide much more efficient mechanisms for requesting and consuming compute, network, and memory resources, breaking away from the traditional legacy 24x7 approach of consuming entire bare-metal or virtual machines with large running OS images. Though serverless still has servers running all of the time, the configuration, operation, and consumption of those servers has evolved into a much simpler and more consumer-friendly approach. Here are a few of the more obvious improvements:
- Serverless provides a way to run at the process level, much as containers allow us to manage, run, and monitor our applications at the process level. The two arguably go hand in hand.
- A serverless platform is always on and in an operationally ready state to accept requests to run code. Consumption is still metered, but you’re only billed for compute resources used to run your specific code, thus drastically reducing the total meter spin time from days and weeks to minutes or seconds.
- Serverless offloads the admin time and cost associated with the many services and complexities traditional hosting requires, such as licensing and managing the OS, images, and tooling needed for configuration and patch management.
- Serverless massively improves one's overall security posture. Attacks are limited to much shorter process run times, and the amount of exploitable code running at any point in time is reduced.
- Scale and performance are built in and automatic. Increases in concurrent process capacity to support peaks in application activity and usage are handled natively and do not have to be programmatically requested.
- Serverless removes large amounts of the effort and complexity involved in repeatedly orchestrating your infrastructure as a platform. Think of it as analogous to removing a large part of the CD from your CI/CD build pipelines.
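To make "running at the process level" concrete, here is a minimal sketch of a serverless function. The handler signature follows the AWS Lambda convention for Python (`event`, `context`); the function name and payload fields are hypothetical, and other platforms use similar shapes.

```python
import json

def handler(event, context):
    # Invoked per request by the platform: there is no daemon to keep
    # alive and no OS image to patch, and billing covers only the time
    # this function actually runs. Concurrency scales automatically.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The platform handles provisioning, scaling, and teardown around each invocation; the code concerns itself only with the request at hand.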
So, what are the hurdles?
The most common obstacle is that most code today is not written in a manner that works well on serverless platforms. Our code has traditionally been written to run as an always-on daemon process, and we have perhaps even gone through extensive efforts to keep those processes running 24x7; serverless, by contrast, is designed around very short-lived runtimes that expire quickly. Serverless code does not make liberal use of volatile memory to store details in arrays or strings for other threads to consume; instead, it returns results to client-initiated requests via protocols such as REST over APIs. There are many other architectural differences, far too many to cover in one blog post, but in any case this discussion would likely steer us toward the conclusion that rewriting code is probably the only viable way forward for any serverless effort.
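The difference in shape can be sketched in a few lines. Both the daemon class and the handler below are hypothetical illustrations, not any particular framework's API:

```python
# Traditional style: a long-lived daemon that holds state in memory
# for other threads to consume, and is kept running 24x7.
class OrderDaemon:
    def __init__(self):
        self.pending = []       # in-process state shared with other threads

    def run_forever(self):
        while True:             # never meant to exit
            ...                 # poll a queue, mutate self.pending, repeat

# Serverless style: a short-lived, stateless handler. Each invocation
# receives its input (e.g. from a REST request), returns a result, and
# exits; any durable state must live in an external store.
def process_order(request: dict) -> dict:
    total = sum(item["qty"] * item["price"] for item in request["items"])
    return {"order_id": request["order_id"], "total": total}
```

Rewriting for serverless is largely a matter of moving from the first shape to the second: pushing state out of process memory and exposing work as discrete request/response functions.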
Ironically, similar debates occur when considering efforts to containerize enterprise applications. An established best practice for containers is to evolve your code toward a microservice-based architecture rather than munging all of it into one giant image.
Just as we have seen pushback against rewriting code to support container initiatives, often settling for wrapping our legacy in one large image, we would likely face the same vigorous arguments against rewriting code for serverless efforts. Investing in upgrading your application code and data capabilities to become more container- and serverless-aware is often seen as budget-averse, but that couldn't be a more incorrect assessment. Not investing in your IP and code, and continuing to divert potential investment toward maintaining and expanding existing hosting legacies, is a significant lost opportunity in cost savings, efficiency gains, and improved security, and fails to invest correctly in core business needs.
In almost all business cases, your code and data matter most. The infrastructure, while still required to run your applications, should be regarded as a commodity, comparable to the light or gas bill, and not directly tied to your business success. Businesses generally don't care where the electricity or gas was generated or how it is delivered; they are charged only for what they consume in the course of doing business, and they don't pay for end-to-end management of the service from origin to consumption. Regardless of the choices made, all transformation efforts will require investment. The strong logical case for investing in your IP, the code and data, rather than in commoditized utility-based services, deserves serious consideration.