Building Lambda-based Microservices

LifeOmic has adopted a serverless microservice architecture since our inception. We have used this approach to build a broad set of HTTP-based APIs that support our Precision Health Cloud (PHC), our LIFE Fasting Tracker and LIFE Extend apps for mobile devices, and our JupiterOne security product.

Jeff Jagoda
Life and Tech @ LifeOmic
5 min read · Jul 31, 2018


At LifeOmic, we very quickly aligned on using AWS (Amazon Web Services) Lambda and API Gateway (APIG) as the primary technologies for implementing our microservices. We’ve documented this approach in our architecture overview. APIG is a fantastic tool for building HTTP-based services on top of a variety of backend AWS services, including Lambda. Lambda provides an incredibly simple and economical compute layer that scales as we need it.

Lambda isn’t specific to web services. The Lambda programming model takes generic event objects as input and produces generic objects as output — there’s nothing specific to HTTP baked into the runtime. Most of our services at LifeOmic are built using Node.js, and the Node.js community has provided some great tooling for building HTTP services with Lambda. In particular, we use Koa and serverless-http to parse the HTTP events from APIG and to produce responses that APIG understands. This allows us to write Lambda service code in almost exactly the same way that we would develop a traditional web service using Koa and Node hosted on AWS EC2 or ECS.
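To give a sense of what this looks like, here is a minimal sketch of a Koa app wrapped for Lambda with serverless-http (the middleware and response are purely illustrative):

// handler.js - a minimal Koa app wrapped for Lambda (illustrative)
const Koa = require('koa');
const serverless = require('serverless-http');

const app = new Koa();

// An ordinary Koa middleware; nothing here is Lambda-specific.
app.use(async (ctx) => {
  ctx.body = { message: `Hello from ${ctx.path}` };
});

// serverless-http translates the APIG proxy event into a request the Koa
// app understands and converts the Koa response back into an APIG response.
module.exports.handler = serverless(app);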

As our platform grew and inter-service dependencies began to emerge between our backend services, we noticed a gap in the tooling around inter-service communication. If Service A needed to make a request to Service B, the available options were either to use an HTTP client to hit the public APIG interface (over the Internet) or to use the AWS SDK to invoke the Service B Lambda function directly (using lambda.invoke()). Routing over the Internet wasn’t appealing to us because it meant that we would need to expose all of our APIs (including those that are only needed for private interactions) on the Internet and incur an additional hop for each request. Using the AWS SDK was also unappealing, as it would tightly couple our upstream code to the AWS SDK and force an unnatural development model onto our developers. What we really wanted was an HTTP client that could dispatch requests directly to our Lambda microservices. This led us to create alpha.
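To make that trade-off concrete, this is roughly what the AWS SDK option looks like (the function name, path, and response handling here are hypothetical): the calling code has to know it is talking to Lambda, and it has to build and unpack APIG-shaped payloads by hand.

const AWS = require('aws-sdk');

const lambda = new AWS.Lambda();

// Direct invocation couples the caller to the AWS SDK and to the shape of
// APIG proxy events and responses (names here are hypothetical).
async function getWidget(id) {
  const result = await lambda.invoke({
    FunctionName: 'service-b',
    Payload: JSON.stringify({
      httpMethod: 'GET',
      path: `/widgets/${id}`,
      headers: { Accept: 'application/json' }
    })
  }).promise();

  // The APIG-style response also has to be unpacked by hand.
  const { statusCode, body } = JSON.parse(result.Payload);
  if (statusCode !== 200) {
    throw new Error(`Service B responded with ${statusCode}`);
  }
  return JSON.parse(body);
}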

An HTTP Client for AWS Lambda

When setting out to build alpha, we didn’t want to completely re-invent the wheel. After all, HTTP is a rich protocol with lots of features and edge cases. Ideally, we would only need to map a few HTTP concepts to Lambda concepts and let an existing HTTP client handle the rest of the protocol details. The main piece missing from a Lambda invocation, compared to an HTTP request, is a URL structure that adequately describes the invocation. That problem was a pretty straightforward one to solve.

We use the lambda:// scheme to differentiate a Lambda invocation from an HTTP request. The host name is the name of the function that we want to invoke. We use the port designator to indicate the function qualifier (this is particularly useful for facilitating blue-green Lambda deployments). The path and query parameters are translated directly into the corresponding APIG event attributes.

lambda://<function name>:<function qualifier>/path?query
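For example, a request made through alpha reads like an ordinary HTTP call; only the URL changes. The function name, alias, and path below are hypothetical, and the exact import shape varies between alpha versions.

const Alpha = require('@lifeomic/alpha');

// An alpha client exposes familiar HTTP client request methods (get, post, etc.).
const client = new Alpha();

async function listWidgets() {
  // Invokes the hypothetical "service-b" function at its "deployed" alias; the
  // path and query string are passed through as APIG-style event attributes.
  const response = await client.get('lambda://service-b:deployed/widgets?limit=10');
  return response.data;
}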

In order to avoid writing a full HTTP client from scratch, we needed to find an HTTP client that we could extend with just our Lambda URL scheme. Axios provided just the client we needed. Axios is a JavaScript HTTP client that has native promise support and is built using an adapter-based architecture. This adapter architecture allows Axios to implement the bulk of the HTTP protocol in the core client and then adapt it to either a server or browser environment. It also makes the client extremely customizable. By extending the Axios client with a custom adapter, we are able to invoke a Lambda function rather than dispatch a traditional HTTP request.

A nice side effect of combining the adapter model with a custom URL scheme is that the same client instance can be used to transparently make requests to different types of downstream services — the only difference that the caller sees is in the URL. In other words, when the client sees a lambda:// scheme, the alpha Lambda adapter is used, and when it encounters the http:// or https:// schemes, the HTTP adapter that ships with Axios is used.
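The shape of such an adapter is roughly the sketch below. This is not alpha’s actual implementation, just an illustration of dispatching on the URL scheme; details such as how the stock adapter is obtained vary between Axios versions.

const axios = require('axios');
const AWS = require('aws-sdk');

// How the stock HTTP adapter is obtained varies between Axios versions;
// this works for the classic 0.x releases.
const httpAdapter = axios.defaults.adapter;
const lambdaClient = new AWS.Lambda();

// Simplified sketch of a scheme-dispatching adapter: lambda:// URLs become
// Lambda invocations, everything else falls through to the stock adapter.
// This is not alpha's actual implementation.
async function lambdaAdapter(config) {
  if (!config.url.startsWith('lambda://')) {
    return httpAdapter(config);
  }

  // lambda://<function name>:<function qualifier>/path?query
  const [, functionName, qualifier, path, query] =
    config.url.match(/^lambda:\/\/([^:\/?]+)(?::([^\/?]+))?([^?]*)(?:\?(.*))?$/);

  const result = await lambdaClient.invoke({
    FunctionName: functionName,
    Qualifier: qualifier,
    Payload: JSON.stringify({
      httpMethod: (config.method || 'get').toUpperCase(),
      path: path || '/',
      queryStringParameters: query ? Object.fromEntries(new URLSearchParams(query)) : null,
      headers: config.headers,
      body: config.data || null // already serialized by Axios at this point
    })
  }).promise();

  // Translate the APIG-style response back into an Axios-style response.
  const payload = JSON.parse(result.Payload);
  return {
    status: payload.statusCode,
    headers: payload.headers || {},
    data: payload.body,
    config
  };
}

// The same client instance transparently serves both lambda:// and http(s)://.
const client = axios.create({ adapter: lambdaAdapter });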

Testing Lambda Handlers

While using an Axios adapter to delegate HTTP requests to the AWS SDK solves the problem of invoking downstream services in a deployed environment, it doesn’t provide any way to exercise a Lambda service in a local test environment. A Lambda function is just a plain JavaScript function from the perspective of an automated test framework. While this means that it is extremely easy to invoke a Lambda function as part of a test, it also means that, without additional tooling, the caller must know how to construct APIG-style HTTP event payloads as well as how to parse and evaluate APIG-style response objects. This is doable, but tedious. Rather than deal with this by hand, we created another Axios adapter that is optimized for the local test environment. This adapter shares logic with the AWS SDK adapter, but rather than invoking the AWS APIs, our test adapter invokes a Lambda handler function directly. This abstraction provides an experience similar to what supertest provides for traditional services. The only difference is that rather than using the superagent client API, alpha uses the Axios client API.
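A test then ends up looking something like this sketch (written in a Jest style, assuming the client can be constructed directly around a handler function; the handler module, route, and expected payload are hypothetical):

const Alpha = require('@lifeomic/alpha');
// Hypothetical module that exports the Lambda handler under test.
const { handler } = require('../src/service');

test('GET /widgets/123 returns the widget', async () => {
  // Binding the client directly to the handler skips APIG and the AWS SDK
  // entirely; the handler is invoked in-process, much like supertest.
  const client = new Alpha(handler);

  const response = await client.get('/widgets/123');

  expect(response.status).toBe(200);
  expect(response.data).toEqual({ id: '123' });
});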

The Benefits of Using HTTP with Lambda

Creating a client that leverages a URL abstraction for Lambda invocations has paid numerous dividends for us at LifeOmic, mainly because almost all other web service tooling understands the HTTP protocol. By wrapping the alpha client in standard API interfaces, we have been able to integrate with a large ecosystem of existing tools. Examples include alpha-cli, which provides a curl-like CLI interface for alpha, and axios-fetch, which provides the standard Fetch Web API interface for axios/alpha client instances.
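For instance, axios-fetch lets any tool that expects a fetch function reach Lambda-backed services through alpha (the service name below is hypothetical, and the exact package exports may vary by version):

const Alpha = require('@lifeomic/alpha');
const { buildAxiosFetch } = require('@lifeomic/axios-fetch');

// Any library that accepts a WHATWG fetch implementation can now reach
// Lambda-backed services through the alpha client.
const alphaFetch = buildAxiosFetch(new Alpha());

async function fetchWidgets() {
  // "service-b" is a hypothetical Lambda-backed service.
  const response = await alphaFetch('lambda://service-b:deployed/widgets');
  return response.json();
}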

Perhaps the biggest payoff for us came when we began building out GraphQL endpoints to support our mobile applications. A single GraphQL endpoint is pretty simple to implement using community tools like the Lambda or Koa adapters that ship with the Apollo server framework. But at LifeOmic, we are obsessed with microservices. This adds a twist.

Each of our backend services exposes a single GraphQL endpoint that exclusively owns the small, discrete set of schema types that the microservice can answer for. We then take full advantage of the Apollo linking and schema stitching features to implement a GraphQL proxy that sits in front of all of our GraphQL services. This allows us to present a single GraphQL endpoint to our consumers that supports an incredibly wide range of relationships between types and services. Using axios-fetch with the alpha client allows us to point Apollo links at our backend Lambda services without routing out to the Internet. After that, everything is just business as usual.
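Concretely, the proxy wires each remote schema up with a link whose fetch implementation is the alpha-backed one. The sketch below uses the Apollo link and graphql-tools schema-stitching APIs from around the time this approach was built; the service names are hypothetical and the exact APIs depend on the versions in use.

const Alpha = require('@lifeomic/alpha');
const { buildAxiosFetch } = require('@lifeomic/axios-fetch');
const { HttpLink } = require('apollo-link-http');
const {
  introspectSchema,
  makeRemoteExecutableSchema,
  mergeSchemas
} = require('graphql-tools');

// A fetch implementation backed by alpha, so GraphQL traffic goes straight
// to the downstream Lambda functions instead of out over the Internet.
const fetch = buildAxiosFetch(new Alpha());

// The service endpoints below are hypothetical.
async function buildGatewaySchema() {
  const usersLink = new HttpLink({ uri: 'lambda://users-service/graphql', fetch });
  const widgetsLink = new HttpLink({ uri: 'lambda://widgets-service/graphql', fetch });

  const usersSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(usersLink),
    link: usersLink
  });
  const widgetsSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(widgetsLink),
    link: widgetsLink
  });

  // Stitch the remote schemas into the single schema the proxy exposes.
  return mergeSchemas({ schemas: [usersSchema, widgetsSchema] });
}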

Jeff Jagoda is a developer on the mobile backend services team at LifeOmic. We love solving difficult problems while building services to support our mobile health tracking LIFE apps. Feel free to contact me on LinkedIn to learn more.
