Intro to the Serverless Framework: Building an API

Eddie Kollar
Published in bitstaq · Jun 16, 2017

Background

Cloud computing services have revolutionized how software systems are developed and deployed. One growing trend in this area has been the rise in popularity of serverless architecture. In the past, serverless described an application architecture that relied heavily on 3rd party services to manage server-side logic and state, typically referred to as Backend-as-a-Service (BaaS). Today, however, the term refers to server-side logic that runs in stateless, event-triggered, ephemeral compute containers managed by a 3rd party, commonly called Function-as-a-Service (FaaS).

AWS Lambda is widely seen as the pioneer of the serverless space, but all of the major cloud players now have competing products. Frameworks like Serverless, Apex, and Chalice are built on top of the various serverless platforms in order to extend their functionality and make serverless products easier to work with.

Benefits of Serverless Architecture

The serverless style of architecture comes with a variety of benefits, namely:

  • Easier operational management, as the platform separates the application from the infrastructure it runs on.
  • Quicker innovation, because this separation allows a focus on application logic rather than concerns stemming from systems engineering of the infrastructure.
  • Reduced operations costs, as you only pay for the time and resources needed to execute a function.

Compared to a traditional server-side setup, the gains from these benefits can be understood in the context of the development life cycle. When deploying a new feature or bug fix, the backend or service where that code lives must typically be taken down temporarily for the update to be applied. Any system downtime can result in the loss of data and a poor user experience. With redundancy and the right deployment configuration this can be mitigated, but the upkeep of such a setup incurs costs in server resources, its own development and maintenance, and dedicated personnel time.

With serverless architecture, developers can apply updates piecemeal with no risk of downtime, as each function is an independent resource. This encourages a modular style of writing code, which is recommended as a best practice for development and testing. As an independent resource, the code runs only when called, meaning there is no cost for idle time.

The Project

In this article, we will be using the Serverless Framework, an open-source application framework for building serverless architectures on AWS Lambda and other cloud-based services. We are going to build a secure API for a ToDo application and write the server-side functions to run on Lambda. Many tutorials for front-end tools and frameworks use the ToDo application to teach their basic concepts. We want to consider what the backend setup for such an application could look like in order to handle server-side logic such as storing and accessing data.

Requirements

To follow along, please make sure that you have Node.js installed on your computer. Following the directions in the Serverless documentation, you can install the command line tool serverless. Please note that at the time of writing there is a known issue with Node.js version 8.0.
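
If you manage your packages with npm, installing the command line tool globally typically looks like this (the version check is just a sanity check):

npm install -g serverless
serverless --version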

Getting Started

To get an idea of the basic structure of a Serverless application, use the command line tool to create an empty project.

eddie:serverless$ serverless create --template aws-nodejs --path serverless-demo
Serverless: Generating boilerplate...
Serverless: Generating boilerplate in "/Users/ekollar/Development/serverless/serverless-demo"
 _______                             __
|   _   .-----.----.--.--.-----.----|  |.-----.-----.-----.
|   |___|  -__|   _|  |  |  -__|   _|  ||  -__|__ --|__ --|
|____   |_____|__|  \___/|_____|__| |__||_____|_____|_____|
|   |   |             The Serverless Application Framework
|       |                           serverless.com, v1.15.1
 -------'
Serverless: Successfully generated boilerplate for template: "aws-nodejs"

Looking at the directory structure, we can see that the boilerplate includes just two files:

eddie:serverless$ tree serverless-demo
serverless-demo
├── handler.js
└── serverless.yml

Inside handler.js we see the code that will be managed and executed in Lambda:
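
It is a single hello function that returns a canned success response and echoes back the event it was invoked with (shown here roughly as the aws-nodejs template generates it; your version may differ slightly):

'use strict';

module.exports.hello = (event, context, callback) => {
  // Build a canned HTTP-style response; the incoming event is echoed back for inspection.
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  // Hand the response back to Lambda using the Node.js callback convention.
  callback(null, response);
};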

A look at the configuration file serverless.yml shows several generated comment lines covering the options for various cloud services. Below are only the uncommented lines that configure this demo project:
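
For the aws-nodejs template this boils down to roughly the following (your generated file may differ slightly):

service: serverless-demo

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello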

Code Deployment

The service section describes the name of the project; provider contains the configuration options for the cloud service provider; the functions section contains configuration describing which functions are available: their naming, what code they relate to, and what events can trigger them.

Our next step is to go inside the project directory, where we'll use the command line tool to deploy this function on AWS:

ekollar:serverless-demo$ serverless deploy -v
Serverless: Packaging service...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
CloudFormation - CREATE_IN_PROGRESS - AWS::CloudFormation::Stack - serverless-demo-dev
CloudFormation - CREATE_IN_PROGRESS - AWS::S3::Bucket - ServerlessDeploymentBucket
CloudFormation - CREATE_IN_PROGRESS - AWS::S3::Bucket - ServerlessDeploymentBucket
CloudFormation - CREATE_COMPLETE - AWS::S3::Bucket - ServerlessDeploymentBucket
CloudFormation - CREATE_COMPLETE - AWS::CloudFormation::Stack - serverless-demo-dev
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (409 B)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
CloudFormation - UPDATE_IN_PROGRESS - AWS::CloudFormation::Stack - serverless-demo-dev
CloudFormation - CREATE_IN_PROGRESS - AWS::Logs::LogGroup - HelloLogGroup
CloudFormation - CREATE_IN_PROGRESS - AWS::Logs::LogGroup - HelloLogGroup
CloudFormation - CREATE_COMPLETE - AWS::Logs::LogGroup - HelloLogGroup
CloudFormation - CREATE_IN_PROGRESS - AWS::IAM::Role - IamRoleLambdaExecution
CloudFormation - CREATE_IN_PROGRESS - AWS::IAM::Role - IamRoleLambdaExecution
CloudFormation - CREATE_COMPLETE - AWS::IAM::Role - IamRoleLambdaExecution
CloudFormation - CREATE_IN_PROGRESS - AWS::Lambda::Function - HelloLambdaFunction
CloudFormation - CREATE_IN_PROGRESS - AWS::Lambda::Function - HelloLambdaFunction
CloudFormation - CREATE_COMPLETE - AWS::Lambda::Function - HelloLambdaFunction
CloudFormation - CREATE_IN_PROGRESS - AWS::Lambda::Version - HelloLambdaVersionLLztSdO2tYQbTC7ic22ZpdDWkh9zLOvbnQsXl4gZ0
CloudFormation - CREATE_IN_PROGRESS - AWS::Lambda::Version - HelloLambdaVersionLLztSdO2tYQbTC7ic22ZpdDWkh9zLOvbnQsXl4gZ0
CloudFormation - CREATE_COMPLETE - AWS::Lambda::Version - HelloLambdaVersionLLztSdO2tYQbTC7ic22ZpdDWkh9zLOvbnQsXl4gZ0
CloudFormation - UPDATE_COMPLETE_CLEANUP_IN_PROGRESS - AWS::CloudFormation::Stack - serverless-demo-dev
CloudFormation - UPDATE_COMPLETE - AWS::CloudFormation::Stack - serverless-demo-dev
Serverless: Stack update finished...
Service Information
service: serverless-demo
stage: dev
region: us-east-1
api keys:
  None
endpoints:
  None
functions:
  hello: serverless-demo-dev-hello
Stack Outputs
HelloLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:743238559645:function:serverless-demo-dev-hello:1
ServerlessDeploymentBucketName: serverless-demo-dev-serverlessdeploymentbucket-zi9rpv2yn3uc

Diving Into the Output

Diving into this output, we learn a few things about how a serverless deployment is configured on the AWS infrastructure. Three services are being utilized: CloudFormation, S3, and Lambda. CloudFormation is a platform that allows users to create and manage a collection of related AWS resources. S3 is short for Simple Storage Service, an object store with a web interface that allows for the storage and retrieval of data. This is where the code will reside, in a designated bucket named serverless-demo-dev-serverlessdeploymentbucket-zi9rpv2yn3uc.

The Service Information section looks familiar, with some additional information beyond the configuration from the serverless.yml file. The keys stage, region, and api keys are in fact default configurations that can be set in that YAML file. stage defines the staging environment the code will be deployed to, region defines which geographical region of the AWS infrastructure the code will reside in, and api keys lists the names of keys that can be used to securely call our Lambda functions. We will set this up in a later step.

The last bit of information is the ARN (Amazon Resource Name) of the Lambda function, which uniquely identifies the resource within AWS.

To see what a call to this function returns, we can use the Serverless command line tool to invoke it directly:

eddie:serverless-demo$ serverless invoke --function hello --log
{
    "statusCode": 200,
    "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":{}}"
}
--------------------------------------------------------------------
START RequestId: c5453e4c-4e02-11e7-af1d-fb217ce93c95 Version: $LATEST
END RequestId: c5453e4c-4e02-11e7-af1d-fb217ce93c95
REPORT RequestId: c5453e4c-4e02-11e7-af1d-fb217ce93c95 Duration: 1.70 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 19 MB

The JSON data is what is returned to the requester of the Lambda function; in our example it is an HTTP response. We'll be creating an HTTP endpoint configuration for this function so that an application external to AWS can call it.

In our serverless.yml file we will add an event for the function:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: sayhello
          method: get

After this change we need to deploy again:
eddie:serverless-demo$ serverless deploy -v

In the output you can see a number of provisioning and configuration steps taking place that we won't go into in detail. You will notice that there is now a new service being used: API Gateway. As the name implies, this service allows for the configuration and use of APIs.

endpoints:
GET - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/sayhello

Running a request against this endpoint will give you the full response, along with the same message we saw from the direct call to the function.

curl https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/sayhello
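
The exact contents of the input field depend on the API Gateway event, but the response should look roughly like this (truncated here for brevity):

{"message":"Go Serverless v1.0! Your function executed successfully!","input":{"resource":"/sayhello","path":"/sayhello","httpMethod":"GET", ... }}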

Since it's not good practice to have insecure endpoints, we are going to add configuration to generate an API key and secure our call to sayhello. Here is what the full revised serverless.yml file will look like:

service: serverless-demo

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
  apiKeys:
    - secret

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: sayhello
          method: get
          private: true

Our next deploy will update the configuration and in the Service Information we will see the API key generated by AWS:

api keys:
  secret: Dt7CiOXofX3TeRvxZxOfe11RVwRZVeSp7OhNXsIv

If we try running the curl command again, we now get an error message:
{"message":"Forbidden"}

Amazon API Gateway Configuration

At the time of writing, the Serverless team is working on automating the association of API keys with endpoints. For now, let me walk you through how to create a Usage Plan for your endpoint; a Usage Plan defines the throttling and quota limits applied to each API key.

Log into your AWS console and navigate to the API Gateway page. Select Usage Plans in the left side menu. When you click on the Create button, a form will pop up. Below you can see my configuration; feel free to adjust it as needed:

Next we add the associated API stage, which in our case will be serverless-demo-dev:

We’ve already generated an API key through the serverless command line tool earlier, but in this step of the wizard we will look it up and associate it with the Usage Plan:

When you’re done you will see the configuration page for the new Usage Plan:
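
If you prefer to script this instead of clicking through the console, the same setup can roughly be done with the AWS CLI; the throttle and quota values below are examples, and the usage plan and key IDs are placeholders you would substitute with your own:

# Create a usage plan with example throttle and quota limits, attached to the dev stage of our API
aws apigateway create-usage-plan --name serverless-demo-plan \
  --throttle burstLimit=200,rateLimit=100 \
  --quota limit=5000,period=MONTH \
  --api-stages apiId=utg4c2yny6,stage=dev

# Associate the API key generated earlier with that usage plan
aws apigateway create-usage-plan-key --usage-plan-id <usage-plan-id> \
  --key-id <api-key-id> --key-type API_KEY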

To test that our key does in fact work, we can now add it as a header to the call:

curl https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/sayhello --header "x-api-key: Dt7CiOXofX3TeRvxZxOfe11RVwRZVeSp7OhNXsIv"

You should receive an HTTP response similar to when the endpoint was insecure.

Creating the ToDo Endpoints

Now we are ready to mock up the endpoints for a ToDo application. We are interested in providing basic CRUD (Create, Read, Update, Delete) functionality, which we will mock by updating the handler.js file, as sketched below.
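
A minimal set of mocked handlers, written in the same callback style as the generated hello function, might look like the following; the handler names (create, list, get, update, delete) are choices that line up with the endpoints listed at the end of this section:

'use strict';

// Shared helper that builds a canned HTTP response. These are mocks with no persistence yet.
const respond = (message, event) => ({
  statusCode: 200,
  body: JSON.stringify({ message, input: event }),
});

module.exports.create = (event, context, callback) =>
  callback(null, respond('ToDo created (mock)', event));

module.exports.list = (event, context, callback) =>
  callback(null, respond('ToDo list (mock)', event));

module.exports.get = (event, context, callback) =>
  callback(null, respond(`ToDo ${event.pathParameters.id} (mock)`, event));

module.exports.update = (event, context, callback) =>
  callback(null, respond(`ToDo ${event.pathParameters.id} updated (mock)`, event));

module.exports.delete = (event, context, callback) =>
  callback(null, respond(`ToDo ${event.pathParameters.id} deleted (mock)`, event));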

Updating the functions section of the YAML file:
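
One way to wire these up, keeping each endpoint private behind our API key just like sayhello, is a functions section along these lines:

functions:
  create:
    handler: handler.create
    events:
      - http:
          path: todos
          method: post
          private: true
  list:
    handler: handler.list
    events:
      - http:
          path: todos
          method: get
          private: true
  get:
    handler: handler.get
    events:
      - http:
          path: todos/{id}
          method: get
          private: true
  update:
    handler: handler.update
    events:
      - http:
          path: todos/{id}
          method: put
          private: true
  delete:
    handler: handler.delete
    events:
      - http:
          path: todos/{id}
          method: delete
          private: true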

With this last deploy we now have fully mocked up API endpoints:

POST - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/todos
GET - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/todos
GET - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/todos/{id}
PUT - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/todos/{id}
DELETE - https://utg4c2yny6.execute-api.us-east-1.amazonaws.com/dev/todos/{id}
functions:
  create: serverless-demo-dev-create
  list: serverless-demo-dev-list
  get: serverless-demo-dev-get
  update: serverless-demo-dev-update
  delete: serverless-demo-dev-delete

Conclusion

Congratulations! We've successfully gone through the basics of creating an API hosted on AWS using the serverless command line tool, and you now know a little about the cloud services used to architect this backend. The next step is to add persistent storage for our ToDo application.

If you would like the full code from this project please visit the GitHub repository.

This article was originally posted on Develop Intelligence. They are the creators of appendTo, which offers JavaScript training courses for teams.
