Decoding AWS Lambda - the Serverless Architecture

Swapnil Kant · Published in Fasal Engineering · 6 min read · Apr 19, 2021

Amazon Web Services (AWS) is the most popular cloud service provider in the world. It makes the work of software engineers easier by providing the numerous features and services needed to host, monitor, and manage cloud-based web applications. Serverless is one such technology that is gaining popularity these days, and learning how to work with it can be very exciting and fun.

In this article, I am going to dig deeper into serverless, and I will also show you how to execute a serverless program on your local machine.

So, let’s begin!

What is Serverless?

“It’s built on next-generation public cloud services that auto-scale and charge only when used. When scale, capacity planning & cost management are automated, the result is software that’s easier to build, maintain, and often up to 99% cheaper.”

You can think of serverless as a cloud-native development model. Broadly speaking, with serverless you are not responsible for provisioning and managing servers; that work is outsourced to the cloud provider (AWS, Microsoft Azure, or any other service). In serverless, developers just package their code and run it without caring about how to manage the servers.

What is AWS Lambda?

Talking particularly about AWS, we have something called AWS Lambda. It is a compute service provided by AWS that lets you run your code without thinking about how to manage your servers, and it also takes care of your scaling needs.

It enables you to run your code for virtually any type of application or backend service and takes complete responsibility for server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, logging, and more.

The major benefit of using AWS Lambda is that you pay only for the compute time your code actually consumes.

Now, let us look at the different components that define serverless:

  1. Service:
    A service can be thought of as a project. It is the place where you define your AWS Lambda functions and the events that trigger them, while your code itself executes inside the handler file. The service definition lives in a file commonly known as serverless.yml, which contains the serverless service, the functions in the service, the provider your code is deployed to (AWS, Microsoft Azure, and so on), custom plugins, the events that trigger each function, and other resources such as serverless variables and databases.
  2. Handler:
    Commonly named handler.js, this is the main file where you write the function that gets triggered from serverless.yml. The function contains the necessary variables and other functions that can be called within the project.
  3. Event:
    The two files serverless.yml and handler.js can be created with the command serverless create --template aws-nodejs --path myFile, where myFile is the folder in which the two files are created, or you can simply create them manually on your own. The event.json file is not created by default; we have to create it ourselves. It contains all the data that is passed to your handler function as parameters (see the sketch after this list).
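As a rough illustration, scaffolding with the command above and adding event.json by hand would leave you with a layout along these lines (the comments are just annotations; the aws-nodejs template may also generate a few extra files such as a .gitignore):

```
myFile/
├── serverless.yml   # service definition: provider, functions, events
├── handler.js       # the function code that gets triggered
└── event.json       # created manually; sample input for local invocation
```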

Decoding serverless.yml

Now, let us have a look at our serverless.yml file

Fig 1.1: serverless.yml

Here you can have a look at all these properties: service is the name of your project, the provider property defines the cloud service you are using along with the permissions needed to interact with AWS infrastructure resources, and the functions property defines the function in your handler.js file that gets triggered. In the above case, our function is named `submit`.
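Since Fig 1.1 is a screenshot, here is a minimal sketch of what such a serverless.yml might look like. The service name, runtime, and region are purely illustrative; only the overall layout and the `submit` function come from the article:

```yaml
service: myService            # name of your project (illustrative)

provider:
  name: aws                   # cloud provider your code is deployed to
  runtime: nodejs14.x         # runtime from the aws-nodejs template (illustrative)
  region: us-east-1           # illustrative; pick your own region

functions:
  submit:                     # the function that gets triggered
    handler: handler.submit   # file (handler.js) and the exported function name
```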

Decoding event.json

Now, let us have a look at the event.json file

Fig 2.2: event.json

Here event.json contains Records as a JSON object, and the data inside them is arranged in the form of arrays.
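A hypothetical event.json following that shape could look like this; the field name and the three numbers are purely illustrative (chosen so that they add up to the 183 we will see in the output later):

```json
{
  "Records": [
    {
      "numbers": [100, 50, 33]
    }
  ]
}
```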

Decoding handler.js

Now, let us have a look at our handler.js file and understand how it works!

Fig 3.3: handler.js
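As Fig 3.3 is also a screenshot, here is a minimal hedged reconstruction of what such a handler.js could look like, assuming it simply sums the numbers from event.json (which matches the output we will see in Fig 5.5):

```javascript
'use strict';

// Hypothetical reconstruction: read the numbers passed in through event.json,
// add them up, and hand the result back through the callback.
module.exports.submit = (event, context, callback) => {
  // event mirrors the structure of event.json (Fig 2.2)
  const numbers = event.Records[0].numbers;
  const total = numbers.reduce((sum, n) => sum + n, 0);

  // First callback argument is the error (null on success), second is the response
  callback(null, {
    message: 'Sum calculated successfully!',
    total: total, // 183 for the illustrative numbers above
  });
};
```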

Here you can notice the statement module.exports.submit = (event, context, callback), which could also be written as export const submit = async (event, context, callback), where you see the three arguments:

  1. event:
    This parameter gives you access to the data from the event.json file that your code needs. It looks like the one shown in Fig 2.2 and can be accessed as shown in Fig 3.3 (very similar to how JSON objects are read).
  2. callback:
    This takes two arguments: an error and a response. If there is an error it is reported, and if there is no error, our response is printed to the console.
  3. context:
    It contains various properties defined for the function. You can simply console.log(context) inside your handler function and see the following result:
Fig 4.4: context argument in handler.js file
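Beyond logging the whole object, you can also read individual properties from it; a small sketch, using standard fields of the Lambda context object:

```javascript
module.exports.submit = (event, context, callback) => {
  // A few standard properties available on the Lambda context object
  console.log(context.functionName);               // name of the running function
  console.log(context.memoryLimitInMB);            // configured memory limit
  console.log(context.awsRequestId);               // unique id for this invocation
  console.log(context.getRemainingTimeInMillis()); // milliseconds left before timeout
  callback(null, 'done');
};
```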

Now that we have understood the basics of serverless, we can easily see how the different files in the folder are linked with each other and how the flow of data takes place.

Running the serverless program on your local machine can be done using the command

serverless invoke local --function calculateFunction --path event.json

where calculateFunction is the name of your function and should match the name defined under the functions property inside the serverless.yml file (submit in our example), and --path points to the given event.json (in your case it may be different).
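For the example files sketched above, where the function is named submit, the same command would read:

```
serverless invoke local --function submit --path event.json
```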

Executing the above command on the serverless program we have discussed so far gives the following final output:

Fig 5.5: Output

Here you can see a long message and, at the end, the number 183, which is the total sum of the three numbers we have in our event.json file.

A pro tip while creating a serverless project: use your terminal to check the result and verify your code flow to avoid any mistakes or errors.

Pros and cons of serverless architecture

Some of the important benefits of serverless architecture are:

  1. It requires nearly zero system administration.
  2. Enables easier operational management.
  3. The cloud service makes the system scalable, and you (the developer) don't have to bang your head over it!
  4. Enables faster setup of your project.

Some of the cons of serverless architecture are:

  1. Testing your code locally becomes a bit tricky.
  2. Increase in complexity.
  3. Restrictions on the local state.
  4. There is always a lack of operational tools.

My view on serverless is very simple: compared to the traditional server, it allows you to scale up your system and manage CPU and resource usage automatically, at a lower cost. When you are working with serverless you need not worry about how to handle your system; at the same time, it is very fast and easy to manage, and you just have to keep your focus on your code and the development part.



Hi, I am Swapnil Kant, an avid programmer, and a full-time learner! One who is highly interested in Algorithm Optimization and Development