Going Serverless

Scott Kamps-Duac
Published in Pico
4 min read · Dec 12, 2017


While eating out in the United States, it is common to be assigned a server for the duration of your meal. This server might be assisted by additional workers, such as water attendants, food runners, and bus persons — but the majority of the high-value work is handled by the assigned server. While this service model has some benefits, it more often than not leads to miscommunications, bottlenecks, and a non-scalable dining experience.

I was fortunate to live in Busan, South Korea for a year. Eating out at any Korean restaurant is a beautiful exercise in efficiency and scalability. Each table is assigned a number and fitted with a physical call button. When a customer presses the button, the table number flashes on a display in the restaurant, notifying the entire service team that service has been requested at that table. A server runs over (often quite literally), collects the context of the request, and marks its cost on a tab kept in the customer's possession. When the meal is over, the customer takes the bill to a cashier, who collects payment.

I always loved this service model, even before my software engineering days. I never found myself looking over my shoulder, trying to catch my server's attention. When the restaurant was at peak capacity, I found relief in knowing that my table number was queued up for service. I had a log of previous requests available to me, and I could end service on my own time by paying the bill and walking out.

The Emergence of Functions as a Service

A major advance in distributed computing was the introduction of operating system virtualization: using a hypervisor to run multiple operating system instances on a single bare-metal machine. This allowed enterprise data centers to drastically increase the utilization and efficiency of their hardware, while cloud providers emerged with Infrastructure as a Service offerings. Containerization improved on virtualization by minimizing low-level dependencies and reducing "spin-up" times.

A recent offering from cloud providers, Functions as a Service, underpins the somewhat misleadingly named infrastructure pattern widely known as "Serverless Computing". While Serverless certainly involves servers computing requests, minimal configuration and automatic provisioning of compute resources invite a more dedicated focus on functional code. Encapsulating logic in small functions with defined event triggers embraces a new era of microservice-oriented system architecture.

At Pico, we are able to leverage Functions as a Service to deliver product functionality that would otherwise be costly, inefficient, and inflexible. Our servers interact with every article read on a client's digital news outlet, so each publisher we onboard brings a significantly increased load. EC2 servers managed by Elastic Beanstalk have served us well, but recent announcements from AWS have solidified our conviction that Serverless is an excellent strategy for scaling our product globally. We can instantly increase our infrastructure throughput based on demand, and pay only for the time our Lambda functions actually run, eliminating wasted computing power.

The Stack

Our team decided to build a proof of concept of what a hybrid serverless architecture would mean for us by creating an AWS Lambda function to perform a scheduled Mailchimp email list sync. For this, our Lambda function needed access to the internet, access to an AWS RDS instance, and a daily CloudWatch event trigger.
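Those three requirements map onto a handful of lines of Serverless Framework configuration. The following is a sketch only — the service name, handler path, and subnet/security group IDs are placeholders — and note that a Lambda placed inside a VPC (to reach RDS) can only reach the internet through a NAT gateway:

```yaml
# serverless.yml — illustrative sketch; names and IDs are placeholders
service: pico-functions

provider:
  name: aws
  runtime: nodejs6.10
  # VPC placement grants the function access to the RDS instance;
  # outbound internet access from inside a VPC requires a NAT gateway.
  vpc:
    securityGroupIds:
      - sg-xxxxxxxx
    subnetIds:
      - subnet-xxxxxxxx

functions:
  mailchimpSync:
    handler: mailchimp.handler
    events:
      # Daily CloudWatch scheduled event
      - schedule: rate(1 day)
```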

We chose the Serverless Framework to assist with zipping, CloudFormation creation, and deployment of our Lambda functions. Working in a Node.js environment, we have a number of vendor libraries that need to be bundled into the zip file as dependencies. For this, we use the serverless-webpack plugin to bundle our function code together with its dependencies. With the following configuration parameter, webpack will create an individual bundle for each function, resulting in minimal, tree-shaken Lambda zips.


# serverless.yml

package:
  individually: true
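For completeness, a sketch of the accompanying webpack configuration — the output path and settings here are assumptions, but the per-function entry map comes from the serverless-webpack plugin's `lib.entries` helper:

```javascript
// webpack.config.js — illustrative sketch assuming the serverless-webpack
// plugin; it resolves one entry per function declared in serverless.yml.
const slsw = require('serverless-webpack');

module.exports = {
  // One entry per declared function, so each zip contains only the code
  // (and tree-shaken dependencies) that function actually uses.
  entry: slsw.lib.entries,
  target: 'node',
  output: {
    libraryTarget: 'commonjs',
    path: __dirname + '/.webpack',
    filename: '[name].js',
  },
};
```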

At this point, the directory structure for our Node API project might look something like this:


./
  functions/
    mailchimp.js
    serverless.yml
    webpack.config.js
  app.js
  package.json
  .gitignore

I believe a single Serverless configuration will suffice to supplement our Node API. Since Serverless allows individual functions to be declared within one configuration, and webpack's tree-shaking keeps each bundle lean, there is no need to add additional Serverless roots at this time. As our use of function services matures, I imagine we might separate out certain functions based on configuration requirements and service type.
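For illustration — these function names and handlers are hypothetical — a new scheduled task would simply be declared alongside the existing one in the same serverless.yml root:

```yaml
# Hypothetical: two functions sharing one Serverless root; each is
# packaged individually into its own zip by serverless-webpack.
functions:
  mailchimpSync:
    handler: mailchimp.handler
    events:
      - schedule: rate(1 day)
  weeklyDigest:
    handler: digest.handler
    events:
      - schedule: rate(7 days)
```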


I look forward to chatting a bit more about the technical implementation of our serverless stack and any changes influenced by the recent AWS announcements soon!
