A Serverless way to serve more

Matías Comercio
Wolox
7 min read · Aug 11, 2017

At Wolox we love innovation. That is why about two years ago we started using the Lambda service from Amazon Web Services. Since this technology was radically new at the time, we had to build our own tool to manage our Lambda repositories' infrastructure. Its core features are a local test environment, automation scripts for bootstrapping, creating, and deploying Lambda functions, and handling of environment variables for different stages.

Since then, we have used it across several projects, with countless benefits.

However, during these past two years, as we grew as a company, building unique technical solutions for our customers in several countries, a fantastic tool was being developed by an open-source community: the Serverless Framework.

We have kept an eye on its development since its early stages, and only in the last few months have we come to believe that this tool is ready to be our trusted solution for managing our serverless resources.

We will not go into detail on topics like ‘What is serverless?’, as the web has plenty of well-written posts and documentation references that explain the main idea behind this concept; for us, serverless is an outstanding infrastructure solution. Instead, we will cover what we believe are the main features of the Serverless Framework and how to use them to get the best out of it.

Before continuing, it’s worth mentioning that all links to the Serverless documentation reference the AWS section. You can later refer to the documentation of whichever provider you need.

Serverless Framework

The Serverless CLI is available as a Node dependency through the npm package manager. It consists of a group of Plugins that ship with the framework by default and provide command-line tools to manage your serverless workflow. The framework itself allows you to create several Functions and group them together in a related Service, attaching related behavior in a single unit of organization and providing a central point for managing all their related resources through the awesome ‘serverless.yml’ configuration file, while still maintaining independence among the service’s functions (e.g. through granular configuration).
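As a quick, hedged sketch of the workflow (the service name and stage below are placeholders, and deployment assumes you already have provider credentials configured), getting started with an AWS/Node service looks roughly like this:

  npm install -g serverless
  # scaffold a new Node.js service for AWS
  serverless create --template aws-nodejs --path my-service
  cd my-service
  # package and deploy the service to the 'dev' stage
  serverless deploy --stage dev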

Main features

Plugins

We could not explain them better than the framework developers themselves, so make sure to read about plugins directly in their documentation.

Configuration file

One of the greatest features of this framework is the centralized and powerful configuration file, which allows flexible configuration using Serverless Variables.

Each service has its own serverless.yml configuration file (with plenty of defaults), whose main responsibilities are:

  • Declare a Serverless service
  • Define the provider’s characteristics, among them:
    • which provider the service will be deployed to
    • which runtime should be used
  • Define one or more functions in the service, each with its own configuration extending or overriding the service configuration
  • Define the events that trigger each function to execute, e.g. HTTP requests, schedules, and provider-specific events (e.g. for AWS: S3, Alexa Skills, SNS, etc.)
  • Allow events listed in the events section to automatically create the resources they require upon deployment
  • Define a set of resources required by the functions in this service
  • Define any custom plugins to be used

Any further service configuration will be done in this file, so we highly recommend reading the docs here.
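To make these responsibilities concrete, here is a minimal, illustrative serverless.yml sketch for an AWS/Node service; the service, function, plugin, and bucket names are placeholders rather than part of any real project:

  service: sample-service

  provider:
    name: aws            # which provider the service will be deployed to
    runtime: nodejs6.10  # which runtime should be used
    stage: dev

  functions:
    hello:
      handler: handler.hello     # exported function 'hello' in handler.js
      events:
        - http:                  # creates an API Gateway endpoint on deployment
            path: hello
            method: get
        - schedule: rate(1 hour) # scheduled trigger

  plugins:
    - serverless-offline         # example of a third-party plugin

  resources:
    Resources:
      SampleBucket:              # extra resource required by the functions
        Type: AWS::S3::Bucket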

Documentation & Open community

Crucial factors that push anyone to adopt a new technology are how easy it is to find instructions on how things (should) work and how feasible it is to find help when problems arise. On both counts, the Serverless Framework has excellent documentation and an increasingly supportive community (e.g. the Gitter community and the Forum).

A bonus fact is that it is an open-source project in continuous development — repository here.

Unit testing

Having our serverless logic developed before deployment lets us write the corresponding unit tests, ensuring that the code does what it should without having to test it manually each time we introduce a change.
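As a minimal sketch (the file names, function name, and greeting logic are purely illustrative), a handler and a unit test for it, runnable for example with mocha, could look like this:

  // handler.js
  module.exports.hello = (event, context, callback) => {
    // Plain business logic that can be exercised without deploying anything
    const name = (event && event.name) || 'world';
    callback(null, { message: `Hello, ${name}!` });
  };

  // test/handler.test.js
  const assert = require('assert');
  const { hello } = require('../handler');

  describe('hello', () => {
    it('greets the given name', (done) => {
      hello({ name: 'Wolox' }, {}, (err, result) => {
        assert.ifError(err);
        assert.strictEqual(result.message, 'Hello, Wolox!');
        done();
      });
    });
  });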

Local testing (AWS only)

To develop features rapidly and safely, we love having different test stages and techniques. The Serverless Framework offers us a way to invoke our functions locally, so we can manually test them before deployment.
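For instance, with a function named hello and an event payload stored in a JSON file (both names are placeholders), a local invocation looks like this:

  serverless invoke local --function hello --path local_tests/sample_event.json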

Multiple cloud hosting platform support

Currently supported: AWS, Microsoft Azure, Apache OpenWhisk and Google Cloud.

Usage

Serverless Framework allows you to create not merely functions but also a complete RESTful API.

For now, we are going to show you our approach to organizing a Node ‘lambdas’ project — just functions — using the Serverless Framework.

Before continuing, keep in mind that these functions — the way we are using them — are all related to one project, so we use a single Serverless Service to manage all the configuration, centralized in one serverless.yml file.

Having said that, let us show you a generic project structure, and then analyze each of its components.

Sample project structure


common_services folder

Inside this folder, we place our common service functions and modules that may be used across different services. For example, a logging service module.

node_modules folder

This folder itself should not be ignored, but its content should. This allows us to create and commit symbolic links with the same folder name to the repository.

You may be asking why we would need to define symbolic links to the node_modules folder. When deploying a Serverless Service, everything referenced by any Serverless Function is deployed as well. However, for this to happen, all these resources must be at the same level as the serverless.yml file or deeper in the hierarchy, so we need to reference the node_modules folder from the outer scope inside here.
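As a sketch, assuming the layout above with the real node_modules folder two levels up (the exact paths depend on your own structure), the symbolic link can be created from inside a service folder like this:

  cd services/service1
  # link the service-local node_modules name to the shared folder at the root
  ln -s ../../node_modules node_modules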

serverless-*.env.yml files

We define all environment variables inside these files. There should be one per environment (following the example, our environments would be dev and prod).

There are two possible approaches here:

  1. Define only one global file per environment and define all common and function variables inside it.
  2. Define one global file per environment for common variables and define one service-local file per environment for function variables.

The first option offers a centralized way to manage all the environment variables related to one environment, making it easier to share these secret files (they should be ignored!). The second one requires you to create more files per service, which makes them a little more indirect to share (not harder, if you know your tools), but it makes it possible to concretely implement a separation of concerns between functions.

We chose the second approach, as it offers a more scalable treatment of environment variables.

Having said that, it’s worth pointing out that you are still able to access your environment variables just like you would in any Node project (via process.env).
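As an illustrative sketch (the file name and variable names are placeholders), an environment file and the serverless.yml fragment that could load it through the Serverless Variables syntax might look like this:

  # serverless-dev.env.yml
  LOG_LEVEL: debug
  API_URL: https://dev.example.com

  # serverless.yml (fragment)
  provider:
    environment: ${file(./serverless-${opt:stage, 'dev'}.env.yml)}

Inside any function’s code, these values are then read as usual, e.g. process.env.LOG_LEVEL.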

services folder

Inside this folder, we locate each service — or function — folder. Following the structure example, we have two services/functions: service1 and service2.

Note that service1 has a symbolic link to both common_services folders, while service2 only has a symbolic link to one function of the common_services folder, placed inside its selected_common_services folder. This highlights the fact that we can include whatever dependencies we need, depending on each case.

Also, you may have noticed that service1 has its own environment variables files, while service2 does not. This does not mean that service2’s functions have no environment variables defined; it means that there are no variables defined specifically for the service2 functions. They may still have global environment variables configured or not, depending on the provider.environment field of the serverless.yml file.

serverless.yml files

Let’s take a look at the serverless.yml file where we configured our serverless project.
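As a hedged, illustrative sketch of its shape (the service name, handler paths, and file names are placeholders, not the actual project’s values), it could look roughly like this:

  service: sample-project

  provider:
    name: aws
    runtime: nodejs6.10
    stage: ${opt:stage, 'dev'}
    # global environment variables shared by every function
    environment: ${file(./serverless-${opt:stage, 'dev'}.env.yml)}

  functions:
    function1:
      handler: services/service1/index.handler
      # function-level environment variables for function1 only
      environment: ${file(./services/service1/serverless-${opt:stage, 'dev'}.env.yml)}
    function2:
      handler: services/service2/index.handler
      # no function-level environment section here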

We stress that function2 does not have any environment variables defined at the function level. However, in this case, it still has the global or service environment variables available for use.

local_tests folders

They contain JSON files with objects that simulate incoming events for the defined functions, used with the invoke local feature available through Serverless for the AWS provider. Note that these files should not contain sensitive information, as they may be version-controlled or pushed to a central repository.
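For example, a hypothetical services/service1/local_tests/sample_event.json (the fields below are placeholders) could contain:

  {
    "name": "Wolox",
    "source": "local-test"
  }

and be used to exercise the function locally with:

  serverless invoke local --function function1 --path services/service1/local_tests/sample_event.json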

Usage sample

You can check a sample implementation of this project’s structure here.

So, the now-mature Serverless Framework has huge advantages when it comes to managing serverless resources. It can be used to implement (almost) any serverless solution on many different cloud hosting platforms — including the most popular ones — and it is flexible enough to add custom or third-party plugins and to structure the project so as to fit your needs.

Although our own home-built tool will be available here, we highly recommend and encourage you to use the Serverless Framework from now on.

Please do not hesitate to contact me for any feedback, suggestion or question you may have.

Posted by Matías Comercio (matias.comercio@wolox.com.ar)

www.wolox.com.ar
