Go & Amazon Lambda

Your Next Great API?

Tom Maiaroto
Serif & Semaphore
Oct 10, 2015

Have you heard of Amazon Lambda? It seems like it’s making big waves lately. Of course I’ve been following it and AWS re:Invent just happened, so I could be exaggerating.

No sweat if you haven’t heard of Lambda. It’s still a new, yet stable, service, and you’ll likely be among the first to preach it at your next water-cooler stop.

Lambda is a way to run code (Node.js, Java, or now Python) without a server. Like an RPC in Amazon’s cloud. You can call it through Amazon’s SDK or other services such as AWS API Gateway. You can control access with IAM, access other AWS services, and so on.
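
To give you a feel for the caller’s side, here’s a minimal sketch of invoking a Lambda function from Go with the AWS SDK for Go. The function name and payload are placeholders for whatever you’ve actually deployed:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	// Create a session in the region where the function lives.
	sess := session.New(&aws.Config{Region: aws.String("us-east-1")})
	svc := lambda.New(sess)

	// "helloWorld" is a placeholder function name; the payload is
	// whatever JSON event your function expects.
	out, err := svc.Invoke(&lambda.InvokeInput{
		FunctionName: aws.String("helloWorld"),
		Payload:      []byte(`{"name":"gopher"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out.Payload))
}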

In fact, this is one way to go about setting up an API and that’s a big win for Lambda. Here’s more about Lambda + API Gateway and how it’s awesome.

Getting it to Go

I said it runs Node.js, Java, and Python… yet the title here includes Go. So… how?

Basically, to run Go in AWS Lambda you need to compile a binary for Amazon’s server architecture (Linux, x86_64) and call it from Node’s child_process.spawn command. All of your code is zipped up, so your binary just sits alongside the JavaScript file (this ends up being on S3). Here are some instructions (note the Gist that provides a better way to do it).
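
To make that concrete, the Go side is just an ordinary program. Here’s a minimal sketch, assuming the Node.js wrapper passes the event JSON as the first command-line argument and treats whatever lands on stdout as the result (the exact handoff depends on the wrapper you use):

// A minimal Go program suited to being spawned by the Node.js shim.
// Assumes the shim passes the Lambda event as JSON in os.Args[1]
// and reads the response from stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	event := map[string]interface{}{}
	if len(os.Args) > 1 {
		if err := json.Unmarshal([]byte(os.Args[1]), &event); err != nil {
			fmt.Fprintln(os.Stderr, "could not parse event:", err)
			os.Exit(1)
		}
	}

	// Do the actual work here; this example just echoes a greeting
	// and the event back as JSON.
	resp, _ := json.Marshal(map[string]interface{}{
		"message": "Hello from Go",
		"event":   event,
	})
	fmt.Println(string(resp))
}

Cross-compile it for Lambda’s environment with GOOS=linux GOARCH=amd64 go build, then zip the binary up next to the JavaScript wrapper.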

Performance Considerations

There’s a catch: it’s a little slow. The duration is all over the map when testing from the console (a terrible way to benchmark), but it’s not uncommon to see 200–300ms for the simple example above.

Subsequent runs did show an expected, dramatic improvement. I’d have to average it around 10ms based on what I saw (your mileage will vary). There are “cold” and “warm” starts with Lambda and, if using API Gateway, there is some caching on top of that.

It’s important to note that with Lambda you are billed for duration, but it’s rounded up to the nearest 100 milliseconds. So pricing isn’t a real concern here just yet.
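
To put rough numbers on that (using the pricing published as I write this, so double-check the current rates): a 128 MB function billed at 100ms consumes 0.0125 GB-seconds, which at $0.00001667 per GB-second is about $0.0000002 of compute, plus $0.0000002 for the request itself at $0.20 per million requests. That works out to roughly $0.40 per million invocations of a simple function, before the monthly free tier even kicks in.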

However, depending on your needs, this may be less than desirable. We gophers like to measure things in nanoseconds, after all. We’re a bit spoiled that way.

How’s that compare to “native” Node.js Lambda? I’ve seen the same kind of simple Node.js examples take less than a millisecond (after the initial invocation).

That’s a pretty dramatic difference, but it’s important to keep in mind that what happens after the Go process is spawned can be faster than what would happen if Node.js were used for the same task instead. Your simple “hello world,” though, is going to be slower.

All I’m trying to look at here is invocation/process spawn speed. However, this up-front penalty is a legitimate concern for some applications.

Harder, Better, Faster, Stronger

Can we do better? Yes, we can actually. I stumbled upon this lambda_proc package on GitHub. It basically keeps a Go process around to handle multiple invocations, which greatly reduces that up-front hit. You can have the simple example run in under 1ms quite frequently (after the first run).
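
The wire format is whatever the Node.js wrapper and the package agree on, so treat this as a sketch of the general idea rather than lambda_proc’s actual protocol: a long-lived Go process that reads one line of JSON per invocation from stdin and writes one line of JSON back to stdout, while the wrapper stays thin.

// Sketch of the "keep a Go process around" idea: the Node.js wrapper
// spawns this once, then writes one line of JSON per invocation to
// stdin and reads one line of JSON back from stdout.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	out := json.NewEncoder(os.Stdout)

	// One loop iteration per Lambda invocation.
	for scanner.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			fmt.Fprintln(os.Stderr, "bad event:", err)
			continue
		}

		// Handle the event; here we just echo it back with a message.
		out.Encode(map[string]interface{}{
			"message": "handled by a long-running Go process",
			"event":   event,
		})
	}
}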

How does it stack up to a simple Node.js Lambda? Really well, as it turns out. A simple Lambda example with Node.js took about 0.30–0.60ms, while the simple Go process (which decoded JSON, by the way) took about 0.60–0.80ms. That’s a very tiny penalty now!

Keep in mind that trying to “benchmark” from the AWS console is not exact, though it gives you a rough idea and removes all doubts and concerns in my mind about speed.

Spawning a Go process in Lambda isn’t really costing you much extra. It nearly matches the invocation speed of a Node.js Lambda.

A Different Perspective on Microservices & APIs

I love the speed, memory usage, multi-threading, and parallel processing aspects of Go. I also love pairing that with a microservices-based architecture to really benefit from the parallel processing.

Koding’s Kite project is always a great example of microservices in Go. It communicates by passing JSON over HTTP. Go with libchan is another way to go.

However, using AWS Lambda and API Gateway gives us yet another option when building microservices exposed through an API. It eliminates a lot of concerns and maintenance. You don’t need to worry about hosting in the same way. It becomes easier in some regards.

There’s certainly a lot of configuration, though, perhaps even more. However, the JAWS framework helps mitigate a lot of that, making the construction and deployment of your API a snap.

Lambda and API Gateway make it easy to maintain and version your API too. They also give us the opportunity to save a ton of money.

JAWS is a great framework and tool for Lambda. Do check it out: https://github.com/jaws-framework/JAWS

Another important addition announced at re:Invent is that Lambdas can now run on a schedule (like cron). This solves a huge architecture problem when using Lambda. You can then use many other AWS services with Lambda in some clever ways to solve other challenges (streams of data, sessions, e-mail, etc.).

I think Lambda is a great choice for an API because your overhead is now a 1:1 ratio with requests to it. You can easily and reliably calculate costs, unlike traditional web application servers, where it’s much harder to estimate resource usage. There you need failover, even for one customer, and you need a certain number of customers before you’re profitable. You have to spend money to make money, isn’t that how it goes? Well, no, actually.

With Lambda you’ll only spend money when you know that you’re going to make money.

I hope Amazon adds official support for Go to Lambda soon. It’ll just make things even easier and slightly faster. I think Go and Lambda are a match made in heaven. Fortunately, we don’t need to wait, thanks to these workarounds in the meantime.

See https://github.com/tmaiaroto/go-lambda-geoip for an example that you can try yourself.

Update: Read how I built a serverless AWS Lambda Golang API in seconds using Aegis.
