The Trade-offs with Serverless Functions
In the world of Serverless, Function as a Service — or FaaS for short — offerings such as AWS Lambda, Azure Functions, GCP Functions, and Cloudflare Workers, among others, have become an efficient and inexpensive way to build highly scalable, high-performing backend services. At SSENSE, we started to develop many of our new services using AWS Lambdas. Using the Serverless Framework as our main tool, we are able to rapidly and easily develop and deploy our services.
Since FaaS is a relatively new development paradigm, it comes with a set of trade-offs that every developer should take into account when developing these kinds of services. In this article, I want to mention those that I consider the most important.
Cold Start
One of the main trade-offs in the Serverless world is the infamous “cold start problem” — which our previous SSENSE-TECH article examines further. It can be described as “the initialization of the function container before execution”. Now you may ask, how does this work? When does this happen? Why should I care?
A cold start will happen when your function is considered to be in a “sleeping state”. This means that when your function has not been executed recently, the provider releases the infrastructure behind it, and no container is kept reserved for it. This is done to save you money by only spinning up infrastructure when you need it.
Once the cold start has happened and your code is executed again, the function is considered warm and will run almost immediately. This means the first execution will always take longer than subsequent ones, and it can become a significant bottleneck under bursts of concurrent executions, since each new concurrent container must go through its own cold start.
A cold start is inevitable. It is one of the most common and, in my opinion, one of the most important problems in the Serverless world. What makes it even more interesting is that a lot of your development choices can heavily impact the cold start of your application (e.g. the size of your code, the number of dependencies, the environment of your choice, etc.). If you want to improve the cold start of your function, here are a couple of suggestions!
Keep Your Serverless Application Warm
There are a couple of patterns proposed by the community and Serverless advocates that will help keep our Lambda warm. If you are using the Serverless Framework, you can use the plugin [serverless-plugin-warmup](https://github.com/FidelLimited/serverless-plugin-warmup) to easily keep your Lambda warm.
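As a sketch, enabling the plugin in your `serverless.yml` looks roughly like this (option names vary between plugin versions, so treat this as an assumption to check against the plugin’s README):

```yaml
plugins:
  - serverless-plugin-warmup

custom:
  warmup:
    default:
      enabled: true
      events:
        - schedule: rate(5 minutes) # ping the function periodically so it stays warm
```

The plugin deploys a small scheduler function that invokes your Lambda on the configured interval, keeping a container alive.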
Keep Your Code and Dependencies as Small as Possible
Always minimize the number of dependencies in your application and, if you are in a script-based environment like NodeJS or Python, minify and tree-shake your code as much as possible to reduce the size of your application files.
Your Development Environment Matters A LOT
Let’s take an example with AWS Lambda: JavaScript (NodeJS) 🆚 golang
JavaScript is a scripting language, so we do not “compile” our code before shipping it to production. Because of this, we need to apply some file optimizations ourselves: tree-shaking to remove unused files and dependencies from our code base, and uglification to strip white space and minify variable names, further reducing the file size. Bundling with webpack also has a positive impact, packing our code and its dependencies into an optimally small artifact. All of these steps must happen BEFORE shipping our code to production. While this adds to the lead time, the smaller, well-optimized artifact pays off by reducing the cold start.
golang is a compiled programming language developed by Google, syntactically similar to C. Thanks to the amazing effort of the golang development team and community, optimizations for your code and dependencies almost always happen at compile time. We just compile our code, deliver it using either the Serverless Framework or the SAM CLI, and we are pretty much ready to go!
As you can see, both languages have their unique set of requirements to reduce the cold start problem.
Keeping Connections Alive is WAY Different
Imagine that you need to connect to resources like database clients or feature flag clients, and you create a connection every time. Your code might look like this:
You can see that every time this function is executed, it connects to a MySQL DB and then queries something for us. It seems straightforward and looks like there’s no problem, but in FaaS we need to think at a much larger scale.
Imagine that this same code runs for 1,000,000 users per second. This will absolutely crash our MySQL DB, since we open a new connection on every single invocation.
To solve this problem, thankfully, AWS Lambda keeps the state of anything declared outside the handler for as long as the container is alive, and that initialization code runs just once per container. For DB connections, we can create a connection pool there and re-use it across all the invocations handled by that container. Our code should now look like:
If you are using another kind of client like Feature Flags, the same principle applies!
The golden rule is: take advantage of the global state of your Lambdas and initialize your connections only once.
You Are Locked to the Vendor
Every FaaS provider offers a different way to create functions within its infrastructure, with a different deployment pipeline and different function signatures. These key differences make it very hard to create vendor-decoupled abstractions. Solutions like the Serverless Framework, Terraform, and Pulumi provide a certain level of abstraction for this, but there is still some coupling. So for now and the near future, if you develop a function for AWS, you are locked to it.
Example using the Serverless Framework: imagine that you have a function built on GCP Functions and you need to migrate it to AWS Lambda.
You may wonder if the Serverless Framework can translate this automatically. The answer, sadly, is no. You need to translate your serverless application manually, since the signature of the function is very different on AWS Lambda:
Getting the Logs Out of the Cloud Vendor Is Quite a Pain
In AWS Lambda, the way you get logs from your serverless application is through CloudWatch, but in my experience, reading logs there has been so bad and so painful that I had to resort to other log aggregation services like DataDog. Even with that, getting the logs out of AWS is quite a process, and it depends a lot on your log aggregation service.
Conclusions
FaaS is an interesting world with a whole slew of new possibilities to create highly scalable, concurrent, and highly available applications. As with everything, solving one set of problems introduces another set, and we need to learn how to overcome most of these. I hope you find this article useful!
You can find more and detailed info on drawbacks in Martin Fowler’s Serverless Architectures explanation.
Editorial reviews by Deanna Chow, Liela Touré, & Mikhail Levkovsky.
Want to work with us? Click here to see all open positions at SSENSE!