6 Lessons Learned from Going Serverless

A shortlist of the serverless tools, tips, and tricks I wish I knew

Dan Van Brunt
A Cloud Guru
Mar 1, 2018

The battle scars are still fresh from my first serverless project. There were quite a few things I took for granted based on my prior experience using EC2 instances — and those assumptions cost me some time and effort.

I learned a lot from deploying a serverless application — including a few tools and tricks that would’ve saved me some wasted energy. Here’s the shortlist of my favorite serverless solutions from the experience — hopefully these tips will make it easier for others to break away from instances and go serverless.

#1: The Serverless Framework

There are three things that often get confused when talking about serverless. Before we get started, let’s make a clear distinction:

The Methodology
Serverless architecture is a methodology that is provider agnostic. It does not mean there are no servers involved. It just means you aren’t managing servers — it’s the provider’s responsibility as part of the service.

The Tool
The Serverless Framework is a multi-provider command line tool that automates and abstracts away a metric ton of manual tasks otherwise needed to develop, test and manage many of the services behind a serverless stack.

The Company
Serverless Inc. is the name of the company that now develops and maintains The Serverless Framework. Austen Collins, Founder & CEO, was the original creator of the Serverless Framework.

At the time of writing, the Serverless Framework supports 4 major cloud providers — Amazon Web Services, Google Cloud Platform, Microsoft Azure and IBM OpenWhisk.

The framework works by creating a relatively simple serverless.yml file that defines your functions, events and resources. The Serverless Framework (SLS) will then deploy and provision everything with a simple sls deploy command. You can also do things like tail Lambda CloudWatch logs with sls logs -f functionName directly into your terminal.

Example Serverless Framework YAML
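The original post embedded this as an image. A minimal serverless.yml in that spirit, with made-up service, function, and resource names, might look something like this:

```yaml
# serverless.yml: a minimal illustrative example (names and values are made up)
service: my-serverless-app

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}
  region: us-east-1

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get

resources:
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
```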

When using AWS as a provider, SLS converts your YAML into an AWS CloudFormation (CFN) template, uploads it to an auto-created deploy-bucket and then kickstarts the CFN stack creation.

So far, working with SLS has been amazing. Even though it is still relatively new to the scene, the framework has quickly become our de facto tool to create, deploy, and test our serverless applications.

The framework is really the catalyst that got us hooked on serverless. To help get started, check out some of their examples on GitHub.

#2: Use SSM and environment variables

Simple Systems Manager (SSM) includes Parameter Store, a great cloud parameter store that can be used across multiple stacks. It can also solve a few common CFN challenges — like cyclical dependencies and deploy-time limitations.

  • To get started, create an SSM parameter resource and set it equal to some value (see the CloudFormation snippet below).
  • Then, read that parameter from your Lambda functions (see the Lambda snippet after it).
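Here’s a minimal sketch of both steps; the parameter name, logical ID, and value are purely illustrative rather than taken from the original screenshots.

```yaml
# Implementing an SSM parameter in a CloudFormation template (name and value are illustrative)
Resources:
  ApiUrlParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /my-app/dev/apiUrl
      Type: String
      Value: https://api.example.com
```

And reading it back from a Node.js Lambda with the AWS SDK:

```javascript
// Reading an SSM parameter from a Lambda function (parameter name is illustrative)
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

exports.handler = async () => {
  const result = await ssm
    .getParameter({ Name: '/my-app/dev/apiUrl', WithDecryption: true })
    .promise();

  return { apiUrl: result.Parameter.Value };
};
```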

SSM is fairly simple — and a very powerful way to manage variables. For what it’s worth, you can also use the Serverless Framework to read and create SSM parameters.

It’s important to note that SSM should not be used instead of env vars — they each solve different problems. Environment variables are better when you know a value at deploy time and when it doesn’t need to be encrypted. Environment variables are also near instantaneous to access inside your Lambda functions.

SSM is a better choice when you don’t know a value at deploy time — or if there are cyclical dependency issues. You can also encrypt SSM values in place using KMS. The downside of SSM is that it requires a call out to the service. Although it’s usually very fast, that extra hop could make a difference in your scenario.
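For comparison, the environment-variable route is just a setting in serverless.yml plus a process.env read inside the function; the variable name here is illustrative:

```yaml
# serverless.yml: a deploy-time value injected as an environment variable
provider:
  name: aws
  runtime: nodejs8.10
  environment:
    STAGE: ${opt:stage, 'dev'}
```

```javascript
// Environment variables are available instantly inside the function, no service call needed
exports.handler = async () => ({ stage: process.env.STAGE });
```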

#3: Use Lambda@Edge to configure Single Page Apps

Proper HTTP statuses and pretty URLs are definitely easy to take for granted. When a page exists, it should return a 200 OK. When the page or an asset is missing, it should return a 404.

In addition, URLs should look like this: https://www.domain.com/path/, and not this: https://www.domain.com/path/index.html. The URLs also shouldn’t require 30 redirects.

These things can traditionally be accomplished on a web server/proxy (httpd, nginx) with an .htaccess file. Simple — right?

Unfortunately, Amazon S3’s Static Website Hosting currently only partially supports these features — which doesn’t really cut it for production. The main reason is how S3 handles your index.html file.

Many articles on Single Page Apps (SPA) suggest setting both the Index document and Error document to your index.html.

The Index Document and Error Document dictate which file to serve up when a request is made to a directory (/path) vs. a file (/path/file.html), and what to do if there is a request for a file that does not exist.

In the case of a request to https://www.domain.com/somePath/, your site loads for the end user as intended. However, if you inspect the network panel, the response returned from the server is a 404 Not Found.

This is because the folder /somePath/ wasn’t found in S3, so it served up your chosen index.html as the Error Document — which in turn handles the route and provides the proper page to the user.

There are a number of reasons it’s bad to have all the non-root routes of your site respond with a 404. Not least of which are SEO score downgrades, SEM platforms thinking your site is down and pulling your ads, and a poor user experience.

Or is it fair to just say that it’s plain ugly?

Enter Lambda@Edge
Using Lambda@Edge allows you to associate a custom Lambda function with each request coming into CloudFront (CF). Within the function’s code, you can make any modifications you like to the request — similar to what you might do in an .htaccess file.

Lambda@Edge URL Rewriting
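The original post showed the rewrite function as a screenshot. A minimal origin-request handler in that spirit (the matching rules below are my own illustration, not the article’s exact code) could look like this:

```javascript
// Lambda@Edge origin-request handler: rewrite directory and HTML requests to the SPA's index.html
// (an illustrative sketch; adjust the matching rules to your routing scheme)
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const uri = request.uri;

  // Directory and page requests get index.html; asset requests (js, css, images, ...) pass through.
  if (uri.endsWith('/') || uri.endsWith('.html') || !uri.includes('.')) {
    request.uri = '/index.html';
  }

  callback(null, request);
};
```

Keep in mind that Lambda@Edge functions must be created in us-east-1 and get associated with the distribution’s cache behavior for the trigger you choose (origin-request here).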

Now, when a request is made to https://www.domain.com/somePath/, the Lambda first checks if the path points to a directory or an HTML file — and if so, rewrites the request to point to your index.html file. Otherwise, if the request is for an asset, it continues on untouched to S3 and succeeds or fails correctly based on whether or not the asset exists.

This also works for case-insensitive routes such as domain.com/MyPath/ -> domain.com/mypath/ and canonical rewrites domain.com -> www.domain.com. For the canonical solution, standard S3 Static Website Hosting would have you set up two CloudFront distributions backed by a bucket that redirects you over to the actual www bucket. If that sounds a bit convoluted, you’re probably not alone.
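For that canonical-host case, one alternative sketch (placeholder domains, not the article’s code) is a viewer-request function that returns the 301 itself instead of relying on a second redirect bucket and distribution:

```javascript
// Lambda@Edge viewer-request handler: 301 from the bare domain to www (an illustrative sketch)
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const host = request.headers.host[0].value;

  if (host === 'domain.com') {
    return callback(null, {
      status: '301',
      statusDescription: 'Moved Permanently',
      headers: {
        location: [{ key: 'Location', value: `https://www.domain.com${request.uri}` }],
      },
    });
  }

  callback(null, request);
};
```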

With Lambda@Edge, you can just rewrite the requests in front of your app as you normally would — but without the need of a web server or proxy instance.

When using CloudFront and S3, you may also want to restrict bucket traffic to only requests coming from the CF CDN. This prevents users from bypassing the CDN and hitting the bucket directly — which can impact costs. AWS does not double dip on traffic coming from the CDN, so it’s free on the S3 side.
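One common way to lock the bucket down (sketched below with hypothetical logical IDs, and assuming a SiteBucket resource defined elsewhere in the same template) is a CloudFront Origin Access Identity plus a bucket policy that only grants read access to that identity:

```yaml
# Restricting the S3 origin to CloudFront via an Origin Access Identity (logical IDs are hypothetical)
Resources:
  SiteOAI:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: Access identity for the site bucket

  SiteBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref SiteBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: s3:GetObject
            Resource: !Sub 'arn:aws:s3:::${SiteBucket}/*'
            Principal:
              CanonicalUser: !GetAtt SiteOAI.S3CanonicalUserId
```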

#4: How to delete Lambda@Edge functions

Prior to January 26, 2018, you could not delete a Lambda@Edge function in any way, shape, or form. Yup. That’s right — once created, you had zero ways to delete it.

Removing the resource from your CloudFormation template would just cause your stack to error. Trying to delete the functions manually would error as well. Not that it costs you anything to have unused Lambdas lying around, but it quickly gets pretty messy in the console.

There seem to be a few other developers who also thought this was odd — so AWS quickly found a way to allow us to delete these manually. Here it is:

  1. Disassociate the function from any CloudFront distributions
    Lambda@Edge functions disassociated before 1/26 can’t be deleted right away. It takes about two hours for the system to clean up things such as your replica functions.
  2. Delete it manually
    Once the replica functions are cleaned up, you can delete the function manually. I’m not sure why deletions have caused Lambda@Edge such trouble, but it sounds like ironing this out completely is one of AWS’s priorities. I just hope they provide a way for CloudFormation stacks to delete without erroring or blocking when tearing things down. For the moment though — manual deletes it is.

If you’d like to dive deeper into Lambda@Edge, you can also checkout my AWS Lambda@Edge course.

#5: Use WAF to white/blacklist your serverless site

Since there is no typical web/proxy server, putting development sites behind simple Basic Auth is not possible. However, with AWS WAF you can set up all kinds of rules to block traffic.

AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

Here’s a very small example of the power behind WAF — I’d highly recommend exploring this service for your own site.

Example WAF Config to Whitelist access to a single IP
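Expressed as CloudFormation (WAF Classic, which is what attaches to CloudFront), a single-IP whitelist looks roughly like the sketch below; the IP address and names are placeholders:

```yaml
# WAF web ACL that only allows a single IP (IP address and names are placeholders)
Resources:
  OfficeIPSet:
    Type: AWS::WAF::IPSet
    Properties:
      Name: office-ip
      IPSetDescriptors:
        - Type: IPV4
          Value: 203.0.113.10/32

  OfficeIPRule:
    Type: AWS::WAF::Rule
    Properties:
      Name: office-ip-rule
      MetricName: officeiprule
      Predicates:
        - DataId: !Ref OfficeIPSet
          Negated: false
          Type: IPMatch

  DevSiteACL:
    Type: AWS::WAF::WebACL
    Properties:
      Name: dev-site-acl
      MetricName: devsiteacl
      DefaultAction:
        Type: BLOCK
      Rules:
        - Action:
            Type: ALLOW
          Priority: 1
          RuleId: !Ref OfficeIPRule
```

The resulting web ACL then gets attached to the CloudFront distribution through the distribution’s WebACLId property.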

#6: Build your own CloudFormation Resources

Not all AWS Services are supported in CloudFormation — and some are only partially supported. You may also find yourself wishing that some service outside of AWS could be used in CloudFormation templates. Well, both of these scenarios can be solved with CloudFormation Custom Resources.

Custom Resources are really not much more than a Lambda function that acts on behalf of your CloudFormation templates to Create, Update and Delete a specific type of resource that you define. This Lambda function lives independently of the deploys that might use it.

For example, take AWS Elastic Transcoder. There is currently no official CloudFormation support for the Pipeline and Preset resources needed. However, with little more than 40 lines of code and the help of the AWS JS SDK, you can build that support yourself.
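What that handful of code roughly looks like is sketched below. To be clear, this is not the author’s actual function: the Pipeline properties, logic, and response plumbing are a minimal, assumption-heavy illustration of the Create/Delete flow.

```javascript
// CloudFormation Custom Resource handler for an Elastic Transcoder pipeline
// (an illustrative sketch: Update handling is omitted and error detail is minimal)
const AWS = require('aws-sdk');
const https = require('https');
const url = require('url');

const transcoder = new AWS.ElasticTranscoder();

// Report the result back to CloudFormation via the pre-signed ResponseURL.
function respond(event, status, physicalId, data) {
  const body = JSON.stringify({
    Status: status,
    Reason: 'See CloudWatch Logs for details',
    PhysicalResourceId: physicalId,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
    Data: data || {},
  });

  const parsed = url.parse(event.ResponseURL);
  return new Promise((resolve, reject) => {
    const req = https.request(
      {
        hostname: parsed.hostname,
        path: parsed.path,
        method: 'PUT',
        headers: { 'content-type': '', 'content-length': Buffer.byteLength(body) },
      },
      resolve
    );
    req.on('error', reject);
    req.end(body);
  });
}

exports.handler = async (event) => {
  try {
    if (event.RequestType === 'Create') {
      const props = event.ResourceProperties;
      const res = await transcoder.createPipeline({
        Name: props.Name,
        InputBucket: props.InputBucket,
        OutputBucket: props.OutputBucket,
        Role: props.Role,
      }).promise();
      return respond(event, 'SUCCESS', res.Pipeline.Id, { PipelineId: res.Pipeline.Id });
    }

    if (event.RequestType === 'Delete') {
      await transcoder.deletePipeline({ Id: event.PhysicalResourceId }).promise();
    }
    return respond(event, 'SUCCESS', event.PhysicalResourceId);
  } catch (err) {
    return respond(event, 'FAILED', event.PhysicalResourceId || 'failed-to-create', {});
  }
};
```

In a template you would then declare something like a Custom::ElasticTranscoderPipeline resource (the type name after Custom:: is up to you) whose ServiceToken property points at this function’s ARN.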

There is also a great helper/boilerplate Custom Resource lib called cfn-lambda that helps with a number of standard tasks.

Taking this same concept — now imagine creating a custom resource for third-party services like Auth0. Even with services outside of AWS, you can now orchestrate your whole stack from CloudFormation. Pretty powerful stuff.

Happy Deploying!

So that’s it — just a few things that I’ve learned the hard way while deploying serverless solutions. Hopefully this will save you some time and effort!

Thanks for taking the time to read this article. I’d be really interested in hearing your feedback, suggestions, and experiences in the comments below!

Dan Van Brunt
A Cloud Guru

Sr Director of Technology @ Klick Inc. Dad, Developer, Automation Freak, and Process Evangelist. Not always in that order.