An Experiment in Serverless

Karim Varela
Published in Safara Engineering · 5 min read · Jul 13, 2018

First, a picture for those who just want to see the architecture:

I’m currently designing our backend platform at Roam and working under the following constraints:

  1. I do not have a devops person and am not planning on having one for many months.
  2. I personally am mediocre at best as a devops engineer and want to minimize the time I, or anyone else on the team, need to spend building, configuring, managing, and monitoring infrastructure.
  3. I want a system which scales automatically with load.
  4. I don’t want to spend too much money as every dollar is precious this early.

What to build our infrastructure on top of is an important decision, with implications that could extend far into our young company’s future, so I spent a couple of days researching different cloud offerings (mostly AWS), different PaaS offerings (mostly Heroku), and how painless (or painful) it is to get up and running with each.

In the end, I was really deciding between 4 options, and I’ll detail the pros and cons of each below:

  1. Heroku
  2. Docker containers running on AWS Fargate
  3. AWS Elastic Beanstalk
  4. Serverless, utilizing AWS Lambda

Heroku

Pros:

  • Slick, easy to understand and manage UI/UX.
  • Great command-line tools for deployment, logging, and management of your resources.
  • Great integration with GitHub to automatically deploy on merges to git branches.
  • Auto-scaling based on response times.
  • Easily spin up development environments for testing.

Cons:

  • Expensive: Anywhere from 3–5 times more expensive than comparable resources on AWS.
  • No serverless capability: Dynos are like containers in the sense that they virtualize memory and compute, but you still need to specify those parameters and experiment with them.
  • Dynos can still take up to a few minutes to start up, especially if they need to check out code from git and do any preprocessing or compilation.
  • No granularity in security controls: For most companies, including Roam, this probably isn’t an issue, but for my last company, which was in Fintech, this was the reason why we left Heroku and moved to AWS.

AWS Fargate

Pros:

  • Don’t have to manage EC2 instances: Similar to Heroku dynos, you just specify how much memory and compute power you want in each container, and AWS manages the infrastructure.
  • Fairly easy to deploy from command-line.
  • Containers spin up quickly, especially if you bake all your dependencies into the Docker image.
  • Really granular control over auto-scaling rules and parameters.
  • Credits: Through a partnership between our accelerator, Seedcamp, and AWS, we may be able to use AWS for free for a year.
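The point above about baking dependencies into the Docker image can be sketched as a Dockerfile. This is a hypothetical Node.js service; the base image and file names are assumptions, not our actual setup:

```dockerfile
# Install dependencies at build time so containers start fast,
# instead of fetching them on every container launch.
FROM node:8-alpine

WORKDIR /app

# Copy the dependency manifest first, so Docker's layer cache is
# reused whenever only application code changes.
COPY package.json package-lock.json ./
RUN npm ci --production

# Copy the application code last.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Because the `npm ci` layer is cached, most builds only re-copy application code, and the resulting image starts without any install step at runtime.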

Cons:

  • Still need to manage memory and CPU requirements for your containers, and still need to experiment with different auto-scaling and monitoring / alerting criteria in your app.
  • Dashboard is not intuitive: You have to learn the concepts of ECR (Elastic Container Registry), ECS (Elastic Container Service), Docker containers, tasks, and task definitions. It takes some time just to wrap your head around how everything is supposed to work together. It’s also difficult to immediately tell which version of your container is currently running in a given task.
  • No out-of-the-box integration with GitHub: At my last company, where we used Fargate, we had to build custom scripts to build our Docker image and then deploy it to AWS.
  • Quite a bit more expensive than traditional ECS, which it sits on top of (this is hearsay, but I just read it on the internet so it must be true)
  • AWS logging is inherently harder to manage than Heroku’s, but this is a drawback of AWS in general: no matter which AWS service you use, you’ll need to use its logging capabilities.
  • Still must operate in a VPC
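For reference, the “custom scripts” mentioned above were essentially a build-tag-push-redeploy sequence. A sketch of one, where the account ID, region, and resource names are placeholders rather than our real values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values -- substitute your own account, region, and names.
REGION=us-east-1
ACCOUNT_ID=123456789012
REPO=my-api
CLUSTER=my-cluster
SERVICE=my-service
IMAGE="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"

# Authenticate the local Docker client against ECR.
$(aws ecr get-login --no-include-email --region "$REGION")

# Build the image, tag it for the ECR repository, and push it.
docker build -t "$REPO" .
docker tag "$REPO:latest" "$IMAGE"
docker push "$IMAGE"

# Force the ECS service to launch new tasks with the new image.
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" \
  --force-new-deployment --region "$REGION"
```

Heroku does all of this for you on a `git push`; with Fargate you own this glue yourself, typically wired into CI.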

AWS Elastic Beanstalk

Pros:

  • A PaaS (Platform as a Service) offering, like Heroku: Beanstalk will set up your entire environment for you, from grabbing code from GitHub, to creating a VPC, to setting up auto-scaling instances, to setting up a persistent datastore.
  • Easy to use dashboard (I haven’t personally tested Beanstalk, but this is what I hear)
  • No extra cost: all you pay for is the actual AWS resources you use (EC2 instances, S3, data ingress/egress, etc.).
  • Easy ability to roll back.
  • Credits: Through a partnership between our accelerator, Seedcamp, and AWS, we may be able to use AWS for free for a year.

Cons:

  • Still need to experiment with different auto-scaling and monitoring / alerting criteria in your app.
  • Still running on EC2 instances, which can be extremely wasteful, especially for single-threaded languages/frameworks, like Node.js or Python.
  • Still need to operate in a VPC

Serverless (AWS Lambda)

Pros:

  • Don’t need to worry about scaling at all: each request is essentially handled by a fresh Lambda invocation.
  • Pay per execution: If nobody is using our system, we don’t pay anything.
  • Credits: Through a partnership between one of our investors and AWS, we may be able to use AWS for free for a year.
  • AWS is soon releasing Aurora Serverless, a serverless database offering, which should fit in nicely with Lambda. They already have DynamoDB, which is semi-serverless, but I am still generally mistrustful of NoSQL DBs, and we will have tons of data in our system that is best modeled by a relational DB. We used Aurora at my last company and found it very easy to manage, but with traditional Aurora you still pay for it regardless of whether it’s being utilized.
  • Toolset matured: My biggest concern with serverless was our ability to efficiently code and test locally. When serverless was in its infancy, there was no good way to test locally. Now things have changed: there’s the Serverless Framework, which provides great testing and deployment tools, and AWS SAM (Serverless Application Model) as well. Both frameworks let you define your functions in code (YAML) and test locally. The Serverless Framework is also great because it’s vendor agnostic, which helps overcome some people’s fear of vendor lock-in with serverless functions.
  • Can potentially operate without setting up a VPC, since functions themselves aren’t addressable. Although, if I want to use Aurora, I think I’ll still need a VPC, at least until Aurora Serverless is ready for prime time. One good thing I just learned, however, is that the default AWS setup now includes a VPC out of the box.
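To make the “define your functions in YAML” point concrete, a minimal `serverless.yml` for the Serverless Framework might look like this — the service name and handler are made up for illustration:

```yaml
# serverless.yml -- a minimal sketch; names here are hypothetical.
service: roam-api

provider:
  name: aws
  runtime: python3.6
  region: us-east-1

functions:
  hello:
    handler: handler.hello        # handler.py, function hello()
    events:
      - http:                     # exposed through API Gateway
          path: hello
          method: get
```

With a file like this in place, `serverless invoke local --function hello` runs the function on your machine, and `serverless deploy` pushes the whole stack (Lambda, API Gateway, IAM roles) to AWS.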

Cons:

  • Cold start problem: It can take up to 800 ms to start a Lambda function that hasn’t been executed in a while. There are ways around this (e.g., pinging your function to keep it warm), but it’s a somewhat irrelevant issue for us, as our API will be hit very often once we’re at scale.
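The keep-alive workaround mentioned above is just a scheduled ping that the handler short-circuits on. A minimal sketch in Python — the `keep_warm` marker and event shape are my own convention here, not anything AWS defines:

```python
import json


def handler(event, context):
    """Hypothetical Lambda handler that short-circuits scheduled
    keep-warm pings so the container stays hot without doing real work."""
    # A CloudWatch scheduled rule can send a custom payload such as
    # {"keep_warm": true} every few minutes to prevent cold starts.
    if isinstance(event, dict) and event.get("keep_warm"):
        return {"statusCode": 200, "body": "warm"}

    # Normal request path (e.g., invoked via API Gateway).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello " + name}),
    }
```

The ping costs one (nearly free) invocation every few minutes, which is far cheaper than keeping an EC2 instance idling for the same effect.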
  • No fancy, graphical, bundled deployment/rollback utilities in AWS: You have to stitch together CodePipeline and CodeBuild resources, which probably isn’t too difficult but which I don’t have experience with yet. There are also Serverless Framework commands for this, so it shouldn’t be too bad.

Serverless for the win!

So, as I’m sure you’ve guessed, I’m going with serverless. Here’s what my architecture looks like so far. I welcome any feedback:
