Creating an API using AWS Lambda w/ API Gateway — part 1

David van Leeuwen
4 min read · Feb 12, 2016


It’s what the cool kids work with nowadays: a serverless infrastructure. It means you only need to worry about code, they say. But you feel lost once you see all the configs, tooling, APIs and services flying around your head.

If this sounds familiar, this article is meant for you. I’ve spent some time figuring it all out, since building APIs on Lambda is still a fairly novel approach. In this first article we’ll mostly cover the basics and focus on API Gateway, as this is the entry point of your API within AWS.

What? Serverless?! 😱

Yes. It means you’re not managing or provisioning servers or instances. In fact, you don’t even need to worry about scaling anymore.

How AWS Lambda can make you feel sometimes…

Although this might be a bit overwhelming at first, AWS is clearly capable of running such an infrastructure (sidenote: they keep amazing me with all kinds of new tech, like Lumberyard). It’s based on the concept of “microservices”. Probably 90% of all current Lambda functions are used for tasks like this: create a thumbnail of an avatar that has been uploaded to S3. Building an entire API as a set of microservices is also possible, but in practice it’s not easy to get right, as functionality and/or code tend to overlap across multiple functions.

So what are we getting into?

AWS offers a UI where you can basically do most of the work. I wouldn’t recommend it however, as you’d spend most of your time like ¯\_(ツ)_/¯. But to get a basic understanding of its capabilities and features we’ll still use it in this article.

// index.js
exports.handler = function(event, context) {
  try {
    context.succeed("pong");
  } catch (e) {
    context.fail("error");
  }
};

Using Lambda, you’ll end up with something like the above. succeed() can return JSON; fail() unfortunately can only return a string (like the above, or from an Error object), as it’s transformed into {"errorMessage": "string"} (you could also use succeed() for failed responses, but that seems like a broken pattern to me; see the context reference). To hook up the above function to a REST endpoint you’ll have to set up API Gateway. Once you’ve created the resource (e.g. GET /ping), the method consists of a Method Request, an Integration Request, a Method Response and an Integration Response. Let’s first take a look at the Integration Request, as this is what fills up the event object within Lambda.
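To get a feel for the succeed/fail behaviour before wiring anything up in the console, you can invoke the handler locally with a stubbed context. The mock below is my own sketch, not part of the AWS SDK — the real Lambda context object has more fields (functionName, getRemainingTimeInMillis, …):

```javascript
// invoke-local.js — a minimal local harness, inlining the handler from above.
var handler = function(event, context) {
  try {
    context.succeed({ message: "pong", received: event });
  } catch (e) {
    context.fail("error");
  }
};

// Stub only the two context methods the handler actually uses.
var mockContext = {
  succeed: function(result) {
    console.log("SUCCESS:", JSON.stringify(result));
  },
  fail: function(error) {
    // API Gateway wraps this string as {"errorMessage": "<string>"}
    console.log("FAIL:", JSON.stringify({ errorMessage: String(error) }));
  }
};

handler({ body: { ping: true } }, mockContext);
```

Running this with node invoke-local.js lets you iterate on the function logic without redeploying on every change.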

When using $input.params, you’ll have to add those parameters to the Method Request under HTTP Request Headers. See the Mapping Template reference for more.
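As an illustration, a body mapping template for application/json in the Integration Request could look something like this — a sketch, since which $input/$context fields you pass through to the event object is entirely up to you:

```
{
  "body": $input.json('$'),
  "headers": {
    #foreach($header in $input.params().header.keySet())
    "$header": "$util.escapeJavaScript($input.params().header.get($header))"#if($foreach.hasNext),#end
    #end
  },
  "method": "$context.httpMethod",
  "path": "$context.resourcePath"
}
```

With a template like this, the request body lands in event.body, the headers in event.headers, and so on — whatever keys you define here are what your Lambda function sees.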

Obviously some fields will be missing (which doesn’t break anything though), but this template gives you a pretty good understanding of what you can do. So when you’ve created a POST method and send a request with a body, it’ll end up in event.body. And once you hit “Deploy API” you can start testing it against a real endpoint. Win \o/

With the other options you can further define what the request and response should do. For instance, with the Method Response and Integration Response you can start defining error responses. In the Method Response you define all the status codes you want to handle, and within the Integration Response you map response messages to them using regular expressions (a tedious process, however). For example, you can map a 400 response by matching the Lambda error message against the pattern “Bad Request: .*”, which would catch:

context.fail("Bad Request: something really bad happened");

Now that you’ve created this endpoint, you can start building out logic for your API as a “serverless microservice” (or call it what you want, it’s just f*cking rad).

Although… there are some pitfalls

Yes, it does sound amazing that you can create an API this way. But if you have actually tried the above, you’ve probably noticed that the endpoint is really slow. The reason is that every Lambda function runs in a container, which has a bootup time. I don’t know exactly how they’ve handled it (as mentioned before, it is magical), but it seems that if you don’t request the endpoint very often, it’s simply slow, and once you start requesting it more frequently, it becomes faster. That’s why it’s handy to set the function’s memory to its maximum of 1536 MB. It doesn’t cost you anything while testing, but once in production you should lower it.
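If you prefer the CLI over the console, bumping the memory could look something like this (the function name ping-api is my own example; aws lambda update-function-configuration is the relevant command):

```
aws lambda update-function-configuration \
  --function-name ping-api \
  --memory-size 1536
```

The same command with a lower --memory-size value is how you’d dial it back down for production.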

Another thing to consider is that Lambda currently runs on Node 0.10.24. Meaning, if you want to do fancy things with ES6 and the like, you’ll have to use a transpiler such as Babel. But once you get into that game you’ll have to look at your project structure, especially when you’re talking “microservices”.

But wait, there’s more!

Luckily there are people making our lives easier, as the developer community always (cough, aka sometimes) does. Projects like Serverless or Apex solve a few of these problems. But I think it’s best to cover those later in another article. So in the next article we’ll take a deep dive into Lambda.

So what now? Well… I’d really appreciate it if you’d give your feedback or tell me if you think I’ve got it totally wrong. And if you’d like to hear more and/or are also into Lambda or serverless infrastructures, please ♡ the article to let me know :-)
