Serverless Showdown: AWS Lambda vs Firebase Google Cloud Functions

June 8 Update: Jason Polites from Google (https://www.linkedin.com/in/polites/) helpfully clarified a couple issues around my analysis of Google Cloud Functions. These updates are added in line with the article text below. Thanks Jason!

If 2016 was the year of microservices, 2017 is shaping up to be the year of serverless computing, most notably through AWS Lambda and Google Cloud Functions, the latter now available through Firebase.

Cloud Functions for Firebase were announced a month ago, bringing them into direct competition with AWS's offerings. This, of course, inevitably invites benchmarks and comparisons between the two. Let's walk through them.

Wait, what is serverless computing?

Ah, the requisite explanation.

Traditional backends have been created using monolithic servers, where a single server may have several different responsibilities under a single codebase. Request comes in, server executes some processing, response comes out. The same server might be responsible for authentication, handling file uploads, and keeping track of user profiles. The key mechanic is that if two different requests come in for two different resources, both get handled by a single codebase. This server might run on dedicated or virtualized machinery (or several machines!), and persistently runs over the span of days, weeks, or months.

More recently, we’ve seen the introduction of microservices as a popular architectural decision. With a microservices approach, there are still distinct servers, but many different servers, each of which handles a single purpose. A single service might be in charge of user authentication, and another one may handle file uploads. Microservice architectures are characterized by many separate codebases and incremental deployments of each individual service. The idea here is that a service which isn’t modified often is less likely to break, along with providing a more logical separation of responsibilities. Like monolithic deployments, microservices are traditionally long-running processes being executed on dedicated or virtualized machinery.

Finally, serverless architectures. Think of them as a natural evolution or extension to microservices.

This is a microservice architecture driven to the extreme. A single chunk of code, or ‘function’, is executed anytime a distinct event occurs. This event might be a user requesting to login, or a user attempting to upload a file. These functions are traditionally very short running in nature: the function ‘wakes up’, executes some amount of work over a duration of 10 milliseconds to 10 seconds, and is then terminated automatically by the service provider. No persistence, no dedicated machinery; in effect, you have no idea where your code is running at any given time. Serverless architectures share some of the benefits of a microservices-based approach, where each function has some distinct responsibility and logical separation.

The Test App

To compare the two services, I wrote a small React Native application with the intent of providing one-time-password authentication.

Rather than expecting a user to enter a tedious email and password combination, the user is expected to enter just their phone number. Once we have their phone number in hand, we generate a short six-digit token and text it to the user via SMS. The user then enters the code back into our app. If they enter the correct code, great, they are now authenticated.

Given that the code is the key authenticating factor, it’s something that clearly shouldn’t be generated or stored directly on the user’s mobile device. Instead, we should generate and store the code somewhere else, somewhere that the user doesn’t have any type of read access to. Enter our serverless functions!

It’s always important to plan out the different cloud functions that will be created. In this case, I see three clear phases of the login process where some amount of logic must be executed in a secure environment:

  1. Create a new user (sign up)
  2. Generate, save, and text a new login code (sign in)
  3. Verify a login code

Each function we create is assigned a unique name, usually to identify its purpose. I followed a simple nomenclature, opting for ‘createUser’, ‘requestOneTimePassword’, and ‘verifyOneTimePassword’.

With these three functions in mind, let’s walk through the deployment process.

Function Creation — Lambda

Creation of functions with Lambda can take two forms, either direct access of the Lambda Console or through the Serverless framework. I chose to use the Serverless framework, as it made deployment (later) much easier.

Serverless encourages centralizing all configuration of your functions into a single YAML file (serverless.yml). The file specifies the function name as it will be displayed on the Lambda console, the name of the handler in your codebase, and some configuration on when to execute the function. In our case, we want to execute the function on an incoming HTTP request with a method of POST.

Here’s the relevant snippet of config from the YML file for creating a new user:

functions:
  userCreate:
    handler: handler.userCreate
    events:
      - http:
          path: users
          method: post
          integration: lambda-proxy
          cors: true
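For completeness, the other two functions get entries of the same shape. A sketch of one — the path here is an assumption, not taken from the deployed project:

```yaml
  requestOneTimePassword:
    handler: handler.requestOneTimePassword
    events:
      - http:
          path: users/login
          method: post
          integration: lambda-proxy
          cors: true
```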

One of the interesting aspects of AWS Lambda is that it is truly built assuming that any type of event might drive a function invocation, not just an incoming HTTP request issued by a client device. Other valid triggers might be a file upload to S3, or a deploy to some other service on AWS. Even though it’s clear to you and me that we only want to run the function on an incoming HTTP request, we still have to be awfully explicit.

I found writing the actual function to require a little more boilerplate than I’d like:

const firebase = require('./firebase');
const helpers = require('./helpers');
const handleError = helpers.handleError;
const handleSuccess = helpers.handleSuccess;

module.exports = function(event, context, callback) {
  const body = JSON.parse(event.body);
  if (!body.phone) {
    return handleError(context, { error: 'Bad Input' });
  }
  const phone = String(body.phone).replace(/[^\d]/g, "");
  firebase.auth().createUser({
    uid: phone
  })
    .then(user => handleSuccess(context, { uid: phone }))
    .catch(err => handleError(context, { error: 'Email or phone in use' }));
};

You will notice a reference to firebase in here; I am still using Firebase for user management, even though the app is hosted on AWS infrastructure.

Yep, the request body has to be manually parsed. You’ll also notice that I made some ‘handleSuccess’ and ‘handleError’ helpers, to avoid some otherwise awful boilerplate. Here’s ‘handleSuccess’:

function handleSuccess(context, data) {
  context.succeed({
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": JSON.stringify(data)
  });
}

Again, don’t expect Lambda to handle JSON encoding or decoding for you; this is all manual.
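I didn’t show ‘handleError’ above, but a plausible counterpart would mirror ‘handleSuccess’ — the 422 status here is an assumption, chosen to match the error responses in the Cloud Functions version later on:

```javascript
// Hypothetical counterpart to handleSuccess. Note that with the
// lambda-proxy integration, an application-level error is still a
// "successful" invocation; the error lives in the HTTP status code.
function handleError(context, data) {
  context.succeed({
    "statusCode": 422,
    "headers": { "Content-Type": "application/json" },
    "body": JSON.stringify(data)
  });
}
```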

Function Creation — Google Cloud Functions

Project creation with Cloud Functions was noticeably easier. It’s clear that the maintainers of this project assume that the most common use case is handling incoming HTTP requests, so there isn’t a tremendous amount of configuration needed to route a particular event to a particular function.

Generation of the initial project was done using the Firebase CLI, which I hadn’t previously been familiar with. The CLI generates an entire Firebase project, which allows hosting important configuration, like your security rules, in a VCS rather than relying entirely upon the console rule editor.

Definition of the functions took place inside of a JavaScript file, where each export is essentially assumed to be a deployable function. For example:

exports.createUser = functions.https.onRequest(createUser);

The actual function creation was far more straightforward.

const admin = require('firebase-admin');

module.exports = function(req, res) {
  if (!req.body.phone) {
    return res.status(422).send({ error: 'Bad Input' });
  }
  const phone = String(req.body.phone).replace(/[^\d]/g, "");
  admin.auth().createUser({ uid: phone })
    .then(user => res.send(user))
    .catch(err => res.status(422).send({ error: err }));
};

Fans of Express JS will immediately be at home with the req, res function signature. The request and response objects use an API identical to Express’s, which makes for a gentle learning curve. Also notice there’s no need for complicated boilerplate around handling responses.
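To give a sense of the remaining functions, here’s a hedged sketch of what verifyOneTimePassword might look like in the same req/res style. Everything here is an assumption rather than the deployed code: the `users/{phone}` storage path, the saved `code` field, and the injected `db` handle (injected so the logic can be exercised without live Firebase credentials):

```javascript
// Hypothetical sketch; in production `db` would be admin.database()
// from firebase-admin, injected here so the handler is testable.
function makeVerifyOneTimePassword(db) {
  return function(req, res) {
    if (!req.body.phone || !req.body.code) {
      return res.status(422).send({ error: 'Bad Input' });
    }
    // Normalize the phone number the same way createUser did.
    const phone = String(req.body.phone).replace(/[^\d]/g, "");
    db.ref('users/' + phone).once('value')
      .then(snapshot => {
        const record = snapshot.val();
        if (!record || String(record.code) !== String(req.body.code)) {
          return res.status(422).send({ error: 'Code not valid' });
        }
        // Code matched; the real function might mint a custom auth token here.
        res.send({ uid: phone });
      })
      .catch(() => res.status(422).send({ error: 'Code not valid' }));
  };
}
```

The factory pattern is purely a testing convenience; the exported Cloud Function would be `functions.https.onRequest(makeVerifyOneTimePassword(admin.database()))`.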

Winner: Google Cloud Functions

Firebase is the clear winner for function creation. There’s less upfront configuration required, along with a far more palatable API. Of course, the caveat is that Firebase requires less configuration because there are fewer function triggers available: no need to specify that a function should be executed on an incoming HTTP request when there are only six different ways of triggering one.

Deployment

Certainly not much to say here, as the deployment process is nearly identical on both platforms. Having set up the initial project with Serverless, deployment on the AWS side was as easy as a terminal command:

serverless deploy

Firebase deployment was similar, using the Firebase CLI:

firebase deploy

In both cases, the time from initiating the deployment to seeing the function go live was about forty seconds. Nothing to lose sleep over.

Winner: Tie

Testing — Lambda

If function creation was easier on Firebase, I can confidently say that testing your functions in a staging environment is far easier on AWS.

For the above project, I spent around two hours from start to finish on AWS, whereas the exact same project took around five hours on Firebase, simply because of the atrocious debug cycle. It all comes down to the presence of a simple tool on the AWS side: the beautiful blue Test button.

Once your function has been deployed, you can create a ‘test’ event by manually composing a request to be sent directly to your function. In this case, I wanted to manually test the creation of a new user by providing a unique phone number. Using one of the sample templates, I manipulated the body of the request to include a phone number, then saved the test event.

Once your test event is created, that beautiful blue Test button will execute your function and immediately show output from the execution in plain text, including not only the function’s response but also any log output coming from the function.
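For reference, a test event for createUser using the lambda-proxy shape only needs a stringified body — which is exactly why the handler has to call JSON.parse on event.body (the phone number here is made up):

```json
{
  "body": "{\"phone\": \"5551234567\"}"
}
```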

Testing — Google Cloud Functions

June 8 update: There is a testing mechanism for Cloud Functions, but it’s not (currently) available in the Firebase console. If you access the “Cloud Console” (https://console.cloud.google.com) you’ll see Cloud Functions there with a range of capabilities, including quick testing. There is also a local emulator which allows you to debug functions locally, and Cloud Platform also has a (free) Cloud Debugger which actually lets you put a breakpoint on live code!

Original writeup: Let me be clear: manual testing of Cloud Functions is a pain, stemming from two aspects:

  1. Cloud Functions don’t have a built-in testing solution with a quick feedback mechanism like AWS does
  2. Getting logs to the Firebase console usually involves waiting for about one to five minutes

To the first point, manual testing of Cloud Functions revolves around your favorite HTTP request utility, be it curl or Postman. If your function fails to execute due to some hidden typo, rest assured that you’ll get a 50x status code without much more information, rather than any helpful debug output.

If you do want to get information out, you’ll be using Firebase’s Function console.

At the console, you’re limited to seeing only logged information, as opposed to AWS’s console which shows both log statements and function response bodies.

But the biggest gripe I have is how long it takes for logs to appear here. With stopwatch in hand, it took one to five minutes of waiting to see any log information pop up from a single request. That terrible feedback loop led to a lot of confusion as I tried to keep track of the order in which I’d executed test requests. Let’s face it: with a feedback loop that long, you immediately fire off one to five manual tests, then try to decipher the output you receive a few minutes later. Not fun.

Winner: AWS Lambda

Pricing

In general, you can count on paying for function invocations based on two metrics: the number of invocations, and the amount of time each invocation takes to execute, modified by the hardware that the function is executed upon.

June 8 Update: I neglected to include Amazon’s API Gateway pricing, which is $3.50 per million requests and is necessary if you want HTTP invocation of the function. Cloud Functions includes this for no extra charge. So the 19,193,857 requests quoted below for AWS would actually cost ~$65, not $1, which is a pretty large difference.

Original: At the time of this writing, Cloud Functions cost $0.40 per million invocations (after two million that are free), while Lambda clocks in at $0.20 per million invocations (after one million that are free).

Execution environment refers to the hardware that is used to run the function. More powerful hardware, more cost. It’s a bit of an exercise in engineering economics, however. If you’re running a computation-heavy function that takes some non-trivial amount of time to execute, you might think to use a less powerful machine, as it costs less money per millisecond of execution time. But it’s a double-edged sword; the slower the machine, the more milliseconds you’re spending! I’d love to do some followup work to figure out the sweet spot in machine size for compute-heavy tasks.

Google Cloud Function’s invocation time pricing is a function of the CPU plus RAM size, whereas AWS is a function of the RAM size only.

For example, a function that takes 500ms to execute on a machine with 256MB of memory and a 400MHz CPU would cost the following on Google:

  (256MB / 1024 MB/GB) * 0.5s * $0.0000025 per GB-s
+ (400MHz / 1000 MHz/GHz) * 0.5s * $0.0000100 per GHz-s
= $0.0000003125 + $0.0000020000
= $0.0000023125 per request

Or, put another way, you’d get 432,432 requests for $1 on Google, not including the free tier or flat cost of invocation.

On AWS Lambda, a similar setup would cost

(256MB / 1024 MB/GB) * 0.5s * $0.000000417 per GB-s
= $0.0000000521 per request

Or, put another way, you’d get 19,193,857 invocations for $1, not including the free tier or flat cost of invocation. A factor of four, really? Someone check my math, please.
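Since the math invites checking, here’s a quick script reproducing both calculations. It takes the per-unit prices quoted above at face value:

```javascript
// Reproduce the per-request cost math from the pricing section.
const seconds = 0.5;

// Google: a memory (GB-s) component plus a CPU (GHz-s) component.
const googlePerRequest =
  (256 / 1024) * seconds * 0.0000025 +   // memory component
  (400 / 1000) * seconds * 0.0000100;    // CPU component

// AWS: a memory component only, at the quoted rate.
const awsPerRequest = (256 / 1024) * seconds * 0.000000417;

console.log(googlePerRequest);      // ≈ 0.0000023125 per request
console.log(1 / googlePerRequest);  // ≈ 432,432 requests per $1
console.log(1 / awsPerRequest);     // ≈ 19.2 million requests per $1
```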

Winner: AWS

Conclusion

At this point, AWS Lambda is head and shoulders above Google Cloud Functions. The testing cycle feels much tighter, and the pricing is currently no-contest. Function creation is a bit easier with Google Cloud, but as soon as you get that boilerplate down you’re good to go.

Officially, Google Cloud Functions are still in beta, so we might see price reductions at some point in time, or better tooling, but for now I can’t help but point friends over to AWS Lambda.