Getting Started with AWS Serverless

My first dive into serverless: going from installing containers for everything and worrying about scale all the time to something completely different.

According to the 2016 re:Invent talk “Serverless Architectural Patterns and Best Practices” (available on YouTube), AWS Lambda is at the very heart of every serverless architecture.

Server architecture generations (EC2 being the oldest, obviously). With Lambda, the latest generation:
  • Functions are the unit of deployment and scale
  • Scales per request: you cannot over- or under-provision
  • Never pay for idle
  • Skip the boring/hard parts

Before diving into the different design patterns, here is a short intro to Lambda and messaging, which I consider the most important components here.

AWS Lambda functions

In a “normal” backend we have routing (the API) and we have handlers (functions) that handle requests and produce responses. With AWS, we get both of those things “as-a-service”: AWS API Gateway and AWS Lambda.

AWS Lambda lets you deploy functions that scale easily, instead of running dedicated servers. A Lambda runs only when it is triggered, and that is exactly what we pay for: no idle time. We need API Gateway for the routing to those functions.

You can init a serverless template locally with these commands, for example:

npm i -g serverless
serverless create -t aws-nodejs

The serverless create command generates the files you need to define the Lambda. There is no point elaborating here, as there are great tutorials on how to use the Serverless toolset.

For more advanced reading, I found a great blog post with an accompanying GitHub repo that will definitely be a good starting point for later serverless projects.

More advanced issues

Lambda functions scale horizontally very well, and you don’t pay for idle time like you would with the “on EC2” approach (an EC2 instance is always on, even when you don’t need it to do anything).

Lambda is stateless: everything you create within the scope of a function is not guaranteed to be available again after the invocation ends (and an invocation is limited to 5 minutes).

Lambdas have a “cold start” stage where AWS loads parts of the program (probably the “imports”), so the suggestion is to keep the code as small as possible. Also, if you don’t need to attach the Lambda to a VPC or other resources, don’t do it, because that adds to the cold start loading time.

AWS Messaging/Queueing

Queueing (or messaging) is key to a scaled and decoupled backend architecture. Basically, it means producers send messages to consumers via durable buffers.

SQS (Simple Queue Service) comes in two flavors:

  • Standard: scales easily as traffic grows, but may deliver duplicate or out-of-order messages.
  • FIFO: in order and with no duplicates; you may create message groups (sub-queues).

SNS (Simple Notification Service) is a pub/sub service.

You can connect a topic (a tag for a notification channel) to a Lambda, an SQS queue, or an HTTP endpoint (your API).

Both of those services (SQS and SNS) are very reliable.

Streams vs. queues: a stream is not deleted after its messages are processed. The subscriber/worker can move back and forth with a cursor as much as it wants (like a video stream). Queues are not persistent: after popping a message, it’s gone from the queue.

Amazon Kinesis is an implementation of streams on AWS.


And it comes as a family of products: Kinesis Streams, Kinesis Firehose, and Kinesis Analytics.

Kinesis Streams are something like a “managed Kafka cluster”.

Basically, Kinesis stores messages as streams (a message can be anything, really, but is limited in size). The basic unit of scale is the “shard”: records are assigned to shards using a partition key (chosen by the producer). You scale by adding or removing shards.

Kinesis vs. SQS vs. SNS

Pattern 1 — webapp

Serverless pattern 1 covers the most common use case for any company: serving your frontend and your backend.

The drawing is self-explanatory: CloudFront serves static web assets from S3, and dynamic services are provided by API Gateway and Lambdas that store persistent data in DynamoDB.

There are great security tools to secure CloudFront and S3. API Gateway has throttling, caching, and usage plans, but the cool thing is authentication and authorization: you can make sure a request comes from an authenticated source using IAM credentials, Cognito, or custom auth strategies such as JWT.

Deployment is possible with a new extension of CloudFormation called SAM (Serverless Application Model) that provides a good way to package and deploy.
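For a flavor of what SAM looks like, here is a minimal template sketch wiring a function to an API route (resource names, path, and runtime are made-up examples):

```yaml
# Minimal SAM template sketch (names and values are illustrative)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.hello
      Runtime: nodejs18.x
      Events:
        GetHello:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```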

CloudFormation is a provisioning tool specifically for AWS. I always try to avoid proprietary solutions, but since this entire talk is about proprietary services delivered by Amazon, it really doesn’t matter anymore. It is very similar to Terraform: you write your provisioning template and load it into CloudFormation, which does the actual infrastructure configuration for you.

Pattern 2 — Batch processing

The heavy-duty batch processing pattern looks like this:

We can’t let a single Lambda do a heavy-duty job because of the 5-minute limit, so we split the task (by time, size, etc.). Then we map the tasks into DynamoDB for later processing by Lambda reducers.

DynamoDB: “Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.” I guess they suggest using it for fast access from the Lambdas (and I guess also as temporary storage, just to move the data between Lambdas; they could have used Kinesis here instead of DynamoDB, for example).
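The “split” step above can be sketched as a plain function: break a large job into fixed-size chunks that each fit comfortably inside the Lambda time limit. (The chunk records would then be written to DynamoDB for the worker Lambdas to pick up; the chunk shape here is an assumption.)

```javascript
// Sketch: split a big batch job into chunks a single Lambda can finish
// well within the 5-minute limit.
function splitJob(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push({
      chunkId: chunks.length,            // stable id for the mapping table
      items: items.slice(i, i + chunkSize),
    });
  }
  return chunks;
}
```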

Pattern 3 — Stream processing

There are lots of physical devices in the field, IoT style, and they all stream data to our backend in a spiky fashion. The backend is required to process and store it in real time. Message durability and ordering are very important: you can’t lose messages, because they may contain business-critical data.

The use case depicted here is an IoT backend that receives temperatures from the devices (sensors), and we need to aggregate the results of some function of those measurements every 5 minutes.

In this scenario, all devices push their measurements into a Kinesis stream, which Lambdas use as an event source, storing the intermediate results in S3 (which could be replaced here by any database). A CloudWatch cron rule triggers a scheduled dispatcher Lambda every 5 minutes, and this Lambda aggregates the results on a 5-minute basis into an S3 results bucket as the final destination.
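The dispatcher’s aggregation step might look roughly like this: average each sensor’s readings inside the 5-minute window (the reading field names and the choice of “average” as the aggregate function are assumptions):

```javascript
// Sketch of a 5-minute window aggregation: average temperature per device.
// Readings outside [windowStart, windowStart + windowMs) are ignored.
function aggregate(readings, windowStart, windowMs) {
  const byDevice = {};
  for (const r of readings) {
    if (r.timestamp < windowStart || r.timestamp >= windowStart + windowMs) continue;
    (byDevice[r.deviceId] = byDevice[r.deviceId] || []).push(r.temperature);
  }
  const averages = {};
  for (const [id, temps] of Object.entries(byDevice)) {
    averages[id] = temps.reduce((a, b) => a + b, 0) / temps.length;
  }
  return averages;
}
```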

A fanout pattern may be required

There are limitations to the way you may use Kinesis. The basic unit of scale for Kinesis throughput is the “shard” (a shard has a fixed amount of throughput capacity). The number of parallel Lambdas we get equals the number of shards in the stream. This means each Lambda must not become a bottleneck for its shard, and if that is impossible, you need to consider an architecture that load-balances the work (fanout).

a “shard” capacity

This design pattern is very simple: one Lambda “drains” the shards as fast as it can and hands the messages off to processor Lambdas, making sure the shard is never a bottleneck, and you can add as many processor Lambdas as you need. The downside is that one benefit of using Kinesis, message ordering, is lost: ordering is not guaranteed this way.
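The dispatch step can be sketched as a plain round-robin split (an assumed strategy; each resulting group would become one asynchronous `lambda.invoke()` call in a real setup). It also makes the ordering trade-off visible: records that were ordered in the shard end up in different workers.

```javascript
// Sketch of the fanout dispatcher: spread a drained batch round-robin
// across N processor Lambdas. Cross-worker ordering is intentionally lost.
function fanout(records, workerCount) {
  const groups = Array.from({ length: workerCount }, () => []);
  records.forEach((record, i) => groups[i % workerCount].push(record));
  return groups; // one batch per processor Lambda
}
```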

RCoil is a project that makes parallel synchronous Lambda calls easy.

Here are some advanced considerations for Lambdas:

  • Make the batches read from Kinesis bigger so you invoke Lambdas less often (this reduces Lambda cost!).
  • Tune the Lambda memory setting, as it also changes the CPU allocation and can make Lambdas perform better.
  • Use the Kinesis Producer Library (KPL) to make better use of your resources’ capacity and increase throughput.

Kinesis Analytics

Amazon Kinesis Analytics is a tool to perform operations on data received in Kinesis. The scheme above replaces the pattern we used before with “window aggregation” code. Very elegant.

Pattern 4 — Automation

Respond to alarms / periodic jobs / auditing / notifications

Example 1 — dynamic DNS assignment

We want to assign DNS names to EC2 instances only while they’re alive, and remove the DNS record when they are stopped or terminated (not running). We use CloudWatch Events to watch EC2 state changes and trigger a Lambda that creates or removes the Route 53 DNS record.

Example 2 — image thumbnail creation from S3

Our users upload images to S3. S3 triggers a Lambda that does the resize and saves the resized image.
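The first step of that Lambda is just unpacking the S3 event. The record shape below is the standard S3 notification (note the key arrives URL-encoded, with `+` for spaces); the `thumbnails/` destination prefix is a made-up naming scheme, and the resize itself is omitted:

```javascript
// Sketch: extract bucket/key from an S3 event and derive the thumbnail key.
function thumbnailTarget(s3Event) {
  const { bucket, object } = s3Event.Records[0].s3;
  // S3 URL-encodes the object key and uses '+' for spaces
  const key = decodeURIComponent(object.key.replace(/\+/g, ' '));
  return { sourceBucket: bucket.name, thumbnailKey: `thumbnails/${key}` };
}
```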


I just spent three years on a project where I did all the backend and DevOps work (while these tools were already production-ready). It’s a shame I didn’t use them back then, but I’m definitely going to use them in my next projects.

There are still several “holes”, like debugging and testing, that I’m not sure are business-ready. No doubt those needs will be answered soon enough.

I don’t like the “lock-in” status of serverless today with AWS. However, Amazon has shown disruptive behavior and doesn’t suffer from the innovator’s dilemma, which means that choosing AWS is choosing cutting-edge technology at all times, and we can change along with market shifts led by Amazon.

I see those gaps in the serverless dev flow. However, it’s so valuable to develop for scale with less DevOps that it’s a no-brainer to use it today.
