In this article, I would like to compare two existing frameworks for deploying serverless functions to the cloud.
So what does serverless mean? The term mainly refers to serverless functions. Assume you want a very simple website that reports the current time. Traditionally, you would set up a server, either physical or in the cloud, that reports the time for every client request. This is a painful and costly approach for such an application: you pay a constant amount for the provisioned resources regardless of the usage pattern, let alone the hassle of managing the infrastructure.
This is where serverless becomes useful and can save you lots of cash. Instead of having a dedicated server for your application, you can run it on serverless functions provided by a cloud provider. Your function, whose job is to report the time of day, will run on some machine in the provider's fleet, and you are no longer responsible for managing dedicated servers.
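To make the time-reporting example concrete, here is a minimal sketch of what such a function could look like as an AWS Lambda handler in Python. The handler name and the response shape are illustrative, not taken from any particular project:

```python
import json
from datetime import datetime, timezone


def handler(event, context):
    """Minimal Lambda-style handler that reports the current UTC time.

    `event` carries the request data and `context` carries runtime
    metadata; neither is needed to report the time.
    """
    now = datetime.now(timezone.utc).isoformat()
    # Return an API Gateway-style response: status code plus a JSON body.
    return {
        "statusCode": 200,
        "body": json.dumps({"time": now}),
    }
```

That is the entire application; everything else (servers, scaling, routing) is the provider's problem.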
This approach lets you pay only for what you use, which is far cheaper than an under-utilized provisioned server. A serverless application can also scale seamlessly from a couple of requests per day to tens of thousands of requests per minute, and you still pay only for the execution time you consume. Some of the main features of serverless applications are summarized here:
Many cloud providers offer serverless computation, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Serverless functions are provided by AWS Lambda, Cloud Functions on GCP, and Azure Functions.
Building a proper serverless application that solves a real-world, non-trivial problem can be tricky. The naive approach is to use the provider's console or command-line interface (CLI), which becomes more challenging as the project grows more complex.
The Serverless Framework and Stackery both make deploying serverless applications to the cloud much easier. After setting up the environment and writing the code on your local machine, you can deploy your solution to the cloud with a simple `serverless deploy` or `stackery deploy` command. This command sets up the functions, resources, and permissions required to run your logic.
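For reference, a Serverless Framework project is described by a `serverless.yml` file. The sketch below shows roughly what one looks like for the time-reporting example; the service name, function name, handler path, and runtime are all illustrative:

```yaml
# serverless.yml -- illustrative sketch, not from a real project
service: time-reporter

provider:
  name: aws
  runtime: python3.9
  region: us-east-1

functions:
  currentTime:
    handler: handler.handler   # module.function of the Lambda entry point
    events:
      - http:                  # expose the function via API Gateway
          path: time
          method: get
```

Running `serverless deploy` against this file provisions the function, the HTTP endpoint, and the necessary IAM permissions in one step.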
The frameworks come in really handy when your function requires an extra library such as requests or numpy. They make your life much easier by letting you simply list the required packages in a requirements.txt file; adding the same dependencies by hand through Lambda Layers would otherwise be considerably more involved.
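For instance, the requirements.txt for the two libraries mentioned above might look like this (the pinned versions are illustrative):

```text
# requirements.txt -- versions shown are examples only
requests==2.31.0
numpy==1.26.4
```

The framework packages these dependencies with your function at deploy time, so you never have to build a Lambda Layer by hand.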
Both frameworks work in quite a similar way with only a few differences, and they are both very well documented, so I am not going to bore you with how to use them. Under the hood, they both translate your solution into an AWS CloudFormation template, which is then used to prepare (and provision) the resources required to run your function.
So, what are the differences between Stackery and Serverless Framework?
Stackery allows you to design your architecture in a GUI, which is then translated into a YAML file. That YAML file is the input to the AWS Serverless Application Model (SAM), which translates it into a CloudFormation template. The CloudFormation template then creates a stack in CloudFormation that includes all your requested resources and permissions. The Serverless Framework, on the other hand, requires you to write the YAML files manually in a text editor, and these are translated directly into a CloudFormation template.
Another important difference is that the Serverless Framework supports eight cloud providers, including AWS, GCP, and Azure, while Stackery supports only AWS, as it is built on AWS SAM.
Additionally, Stackery lets you observe your architecture visually, and some people and teams find working with a GUI more convenient than hand-coding the YAML files. However, this convenience comes with some limitations. While both frameworks support the most common AWS services and components, some features are supported only by the Serverless Framework. For example, several Lambda event triggers, such as IoT and Alexa Skills, are supported only by the Serverless Framework at the time of this writing. That said, I think with a bit of hacking one should be able to use these in Stackery as well.
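As a sketch of what those Serverless-Framework-only triggers look like, the fragment below declares an Alexa Skill trigger and an IoT topic-rule trigger in a `serverless.yml`. The handler paths, the skill ID, and the topic name are all placeholders:

```yaml
functions:
  alexaHandler:
    handler: handler.alexa          # placeholder module/function
    events:
      - alexaSkill: amzn1.ask.skill.REPLACE-WITH-YOUR-SKILL-ID
  iotHandler:
    handler: handler.iot            # placeholder module/function
    events:
      - iot:
          sql: "SELECT * FROM 'mytopic'"   # placeholder IoT topic rule
```

In Stackery's GUI there is no equivalent drag-and-drop component for these triggers, which is the limitation described above.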
Both of these frameworks are really effective for deploying serverless applications, and each has its pros and cons.
In my personal opinion, Stackery is useful when you want to set up an agile project, don't want to be bothered with writing YAML files, and would like a graphical view of your architecture for when you come back to the project after a while. The GUI can be particularly useful when the architecture is complex. On the other hand, if you are not afraid of writing YAML files, the Serverless Framework gives you greater flexibility through its broader range of supported providers and functionality.
In summary, if you are fairly sure your project will never leave AWS, Stackery might be the preferred option for you. If, however, you anticipate that the project might grow and you do not want to be limited in providers or functionality, then the Serverless Framework might be the better choice.