Blueprints for up(1)

TJ Holowaychuk
4 min read · Sep 21, 2016


The latest proxy functionality in AWS API Gateway & Lambda enables some nice new capabilities, so I wanted to share the “blueprints” for a program called “up”.

EDIT: I didn’t see anyone working on this kind of project, so I started it here https://github.com/apex/up.

up(1) would let anyone provision and deploy web apps and APIs with a single command, scaling automatically and in many cases running completely free thanks to the AWS free tier.

While I can’t afford to spend a ton of time on open-source (unless someone wants to sponsor 😏), it might be a cool little project for someone to hack on. Just because it’s on AWS doesn’t mean it has to be hard!

What would it do?

When I created apex(1) the idea was to simplify Lambda function management for use in pipelines, which is my primary use-case with Ping; APIs and web applications, however, are a completely different problem.

You can use Gateway’s Swagger support to define APIs, which does have some benefits such as per-route metrics visibility and rate-limiting, but I think plenty of people just want to deploy their “vanilla” app or API.

The up(1) program would allow you to deploy any Node.js, Python, Java, or Go web server to API Gateway & Lambda for near-infinite scaling capabilities with a single command.

On top of that up(1) could easily provision SSL certificates via ACM and DNS via Route 53, among other things such as rate-limiting and caching.

How would it work?

A user would cd into their project’s directory, which contains a ./server file: simply a file with the exec bit set and a hashbang line specifying the interpreter (Node, Lua, Python, etc.).

This could be as simple as:

#!/usr/bin/env node
const http = require('http')
http.createServer((req, res) => res.end('Hello World')).listen(3000)

The user types:

$ up

A zip file is created in-memory and uploaded to S3. A CloudFormation stack is created to set up the initial API Gateway, Lambda, Route 53, and ACM configuration.
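
A minimal sketch of that step with the Node.js AWS SDK (the bucket, key, and stack names are illustrative; the template itself is whatever up(1) generates):

// Sketch: upload the code bundle, then create the stack that wires everything up.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
const cfn = new AWS.CloudFormation()

async function deploy(zip, template) {
  await s3.putObject({ Bucket: 'up-artifacts', Key: 'app.zip', Body: zip }).promise()
  await cfn.createStack({
    StackName: 'up-app',
    TemplateBody: JSON.stringify(template),
    Capabilities: ['CAPABILITY_IAM'] // the stack creates the Lambda execution role
  }).promise()
}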

The API Gateway and Lambda combination is configured in such a way that all paths and all methods are passed to the Lambda function, aka the new proxy mode.
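
With the proxy integration, the event handed to the Lambda function already carries the pieces of the original HTTP request, roughly shaped like this (trimmed for brevity):

// Rough shape of a proxy-integration event (trimmed):
{
  httpMethod: 'GET',
  path: '/users/123',
  headers: { Host: 'api.example.com' },
  queryStringParameters: { page: '2' },
  body: null,
  isBase64Encoded: false
}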

The Lambda function is a shim managed by up(1); on boot it runs ./server via `child_process.spawn()`. There’s nothing stopping you from binding to a unix domain socket or TCP port inside a Lambda function’s container, so requests can be forwarded to ./server once it is listening.
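
A rough sketch of what that shim might look like, assuming the server listens on the same port as the example above (query strings, binary bodies, and waiting for the server to be listening are left out; a real shim would handle all three):

// Lambda shim sketch: boot ./server once per container, then proxy events to it.
const spawn = require('child_process').spawn
const http = require('http')

const PORT = 3000 // matches the example server above
let booted = false

exports.handler = (event, context, callback) => {
  if (!booted) {
    spawn('./server', [], { stdio: 'inherit' })
    booted = true
  }

  const req = http.request({
    port: PORT,
    method: event.httpMethod,
    path: event.path,
    headers: event.headers
  }, res => {
    let body = ''
    res.on('data', chunk => body += chunk)
    res.on('end', () => callback(null, {
      statusCode: res.statusCode,
      headers: res.headers,
      body: body
    }))
  })

  req.on('error', callback)
  req.end(event.body)
}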

This would also serve as a convenient abstraction for injecting middleware for tracing, metrics, custom logging and so on without touching the user’s code.
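
For instance, the shim could wrap its own handler to record request duration, and metrics or tracing could hook in the same way:

// Sketch: wrap the handler to log request duration without touching the user's app.
const wrap = handler => (event, context, callback) => {
  const start = Date.now()
  handler(event, context, (err, res) => {
    console.log('%s %s %dms', event.httpMethod, event.path, Date.now() - start)
    callback(err, res)
  })
}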

To deploy a new release the user simply types the same command:

$ up

Rollbacks, environment variables and so on would also be easy to provide, just as apex(1) does.

$ up rollback
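
One way this could work (an assumption on my part, nothing settled here) is to publish a Lambda version on each deploy and point an alias at it, so a rollback becomes a single alias update:

// Sketch: roll back by pointing the "current" alias at the previous published version.
const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

function rollback(previousVersion) {
  return lambda.updateAlias({
    FunctionName: 'up-app', // illustrative name
    Name: 'current',        // illustrative alias
    FunctionVersion: previousVersion
  }).promise()
}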

That’s really all there is to it: a pretty straightforward project to implement, but it would effectively remove all the cognitive overhead of AWS for simple API and application use-cases.

In the case of Go, Rust, and others which produce binaries, you could provide build hooks like apex(1) does in order to produce the ./server binary, or infer that the project is Go to make this even easier.
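
A sketch of how that might look, assuming a hypothetical hooks config and a fallback that infers Go from the source tree:

// Sketch: run a configured build hook, or infer a Go build, to produce ./server.
const execSync = require('child_process').execSync
const fs = require('fs')

function build(config) {
  config = config || {}
  let cmd = config.hooks && config.hooks.build
  if (!cmd && fs.readdirSync('.').some(f => f.endsWith('.go'))) {
    cmd = 'GOOS=linux GOARCH=amd64 go build -o server .'
  }
  if (cmd) execSync(cmd, { stdio: 'inherit' })
}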

Since API Gateway and Lambda are available globally, it would be ideal to provide an option to deploy to each AWS region for improved latency, though the benefits of this would be hindered if you don’t have regional replicas for any data stores in your app.

Ideally of course written in Go (or similar) so you don’t need to install a package manager just to get started, or wait 5 minutes while npm installs the command.

AWS service integration

Applications and APIs don’t stop at the server of course; you need databases, queues, mail services, logging and so on. With CloudFormation, up(1) could also spin up RDS, Elasticsearch, and other services in a simple declarative manner, and provide their endpoints as environment variables.
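
As a rough illustration (the declaration format is invented here), a declared database might expand into an RDS resource in the stack, with its endpoint handed to the app as an environment variable:

// Sketch: expand a hypothetical "database" declaration into a CloudFormation
// resource and expose its endpoint to the app as an environment variable.
function expandDatabase(decl) {
  return {
    resource: {
      AppDB: {
        Type: 'AWS::RDS::DBInstance',
        Properties: {
          Engine: decl.engine, // e.g. "postgres"
          DBInstanceClass: 'db.t2.micro',
          AllocatedStorage: '5',
          MasterUsername: decl.username,
          MasterUserPassword: decl.password
        }
      }
    },
    env: {
      DATABASE_HOST: { 'Fn::GetAtt': ['AppDB', 'Endpoint.Address'] }
    }
  }
}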

CloudFormation has come a long way; earlier this year they released “Change Set” support, which brings its functionality more in line with Terraform’s. This means you could easily dump any potential changes to your stack before up(1) goes ahead with the change.
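
A sketch of that preview step with the Node.js SDK (the stack and change-set names are illustrative):

// Sketch: create a change set and list the pending changes before executing anything.
const AWS = require('aws-sdk')
const cfn = new AWS.CloudFormation()

async function preview(template) {
  const names = { StackName: 'up-app', ChangeSetName: 'up-preview' }
  await cfn.createChangeSet(Object.assign({ TemplateBody: JSON.stringify(template) }, names)).promise()
  await cfn.waitFor('changeSetCreateComplete', names).promise()
  const { Changes } = await cfn.describeChangeSet(names).promise()
  Changes.forEach(c => console.log(c.ResourceChange.Action, c.ResourceChange.LogicalResourceId))
}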

One of the biggest issues with navigating the AWS ecosystem is dealing with IAM. A simple declarative dependency graph could effectively eliminate the need to manually specify IAM policies and roles.
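
As a rough illustration (the “uses” declaration is invented here), each declared relationship could expand into just the policy statements it implies:

// Sketch: turn a hypothetical "uses" declaration into the IAM policy it implies,
// e.g. policyFor([{ type: 's3:read', target: 'my-bucket' }])
function policyFor(uses) {
  const expand = {
    's3:read':  bucket => ({ Effect: 'Allow', Action: ['s3:GetObject'], Resource: `arn:aws:s3:::${bucket}/*` }),
    'sqs:send': queue  => ({ Effect: 'Allow', Action: ['sqs:SendMessage'], Resource: queue })
  }
  return {
    Version: '2012-10-17',
    Statement: uses.map(u => expand[u.type](u.target))
  }
}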

This is where I feel Terraform falls short as well: its modules are currently not powerful enough to hide this information, and ideally you want to describe the relationships between services, not the mundane and error-prone permissions required by anything in AWS-land.

Thoughts

I’m still actually quite surprised that AWS doesn’t just allow pulling from their Docker registry to deploy containers behind API Gateway, but maybe that will come in the future. This technique would ultimately still suffer slightly from Lambda’s limitations, but being able to run Go, Node, Java, Python, Lua, and Rust covers most cases.

This of course won’t give you the ultimate flexibility that deploying to Kubernetes or ECS would: no locality for micro-services, no private networking. However, for many simple apps and APIs this would be fine.

ACM is not supported in API Gateway yet, so that’s actually the missing piece here, but it’s close :p.
