Faster Deployment — AWS Lambda Hot Deploy with Parcel

An experiment in packaging a set of AWS Lambda services with Parcel so that they can be reloaded at runtime 🚀

James Robinson
6 min read · Mar 6, 2020

Github: https://github.com/jamesjessian/lambda-parcel-hot-deploy

Redeploying AWS Lambda instances or Google Firebase Functions is a slow process.

In the grand scheme of things, redeploying a typical set of services probably takes less than five minutes, so it’s not a big deal. Of course, our testing is so excellent that we never need to deploy anything more than once, knowing full well that it is perfect and needs no further changes whatsoever. But when we just want to redeploy something quickly, without waiting for some continuous deployment process to pick it up and re-instantiate all those “serverless” servers, so that we can fix that one little typo and give ourselves the small joy of confirming that we’ve actually made something that finally works this time… those minutes are long.

📦🚀⚡

The excellent Parcel bundler offers the ability to bundle a package of code (including dependencies) to run in a Node environment (--target node). So I wanted to try deploying a Node “bundle” to an S3 bucket (which is quick) and allow my deployed Lambda instances to pick it up.

This probably isn’t something you want to use in a production environment, as it will add to your request/response and cold-startup time (although a lot of this could probably be mitigated into insignificance) and it relies on one big eval() statement in there somewhere, which is generally considered dodgy.

But it does work. Here is an example.

Deploying a simple service

If we were desperate for an HTTP web-service that provided us the current date and time in London in this sort of format: March 7th 2020, 4:46:28 pm, we may choose to use the Serverless Framework ⚡ to deploy the following to AWS Lambda…

lambdaHandlers.js contains our Lambda function:

const routes = require('./routes')
const simpleLambdaHandler = require('./simpleLambdaHandler') // path may differ

async function getTime(event, context, cb) {
  // This is our service that returns/does something:
  const simpleHttpHandler = routes.getTime

  // simpleLambdaHandler is a helper function to get the body/pathParams
  // out of the request, call our service, and return an HTTP response in
  // the right format:
  return simpleLambdaHandler(simpleHttpHandler, event, context, cb)
}

module.exports = { getTime }
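The simpleLambdaHandler helper itself isn’t shown above. Hypothetically, it might look something like this (the shape and the CORS header are my assumptions, not taken from the repo):

```javascript
// Hypothetical sketch of a simpleLambdaHandler helper: call the plain
// service function and wrap its result as an API Gateway HTTP response.
async function simpleLambdaHandler(httpHandler, event, context, cb) {
  try {
    // Pass through whatever the service might need from the request
    const result = await httpHandler(event.pathParameters, event.body)
    return {
      statusCode: 200,
      headers: { 'Access-Control-Allow-Origin': '*' },
      body: JSON.stringify(result),
    }
  } catch (err) {
    // Any thrown error becomes a 500 with a JSON body
    return { statusCode: 500, body: JSON.stringify({ error: err.message }) }
  }
}

module.exports = simpleLambdaHandler
```

The point is that the route functions stay plain JavaScript, with no Lambda-specific shape, which is what makes them easy to bundle and swap out later.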

routes/getTime.js contains the actual service code:

const moment = require('moment')

function getTime() {
  return { time: moment().format('MMMM Do YYYY, h:mm:ss a') }
}

module.exports = getTime

Create a serverless.yml to declare the endpoint:

getTime:
  handler: lambdaHandlers.getTime
  events:
    - http:
        method: GET
        path: time
        cors:
          origins:
            - '*'

$ npm install
$ serverless deploy

Success. You win.

$ curl https://blahblahblah.amazonaws.com/dev/time
{"time":"March 7th 2020, 4:46:28 pm"}

But now that date format no longer meets the requirements…

It can’t say March 7th 2020 — it should be 7th March 2020!

- return { time: moment().format('MMMM Do YYYY, h:mm:ss a') }
+ return { time: moment().format('Do MMMM YYYY, h:mm:ss a') }

I have to spend at least one full minute of my life waiting for a redeployment? For a one-line change. Outrageous.

Hot Deployment

In order for our Lambdas to just pick up some new code and keep on running, we will:

  1. Package our services/routes into a single JS file with Parcel
  2. Upload that into a private S3 bucket
  3. Get our Lambdas to check that bucket for changes
  4. Grab and interpret new code in place of the existing services

Build

Packaging is simple. All of my routes live in a folder called /routes and I have a single /routes/index.js to bring them all together:

const getTime = require('./getTime')

module.exports = {
  getTime,
}

Install Parcel:

npm install --save-dev parcel

Add a script to your package.json which will run Parcel using routes/index.js as its entry point. This will generate our packaged bundle of code: dist/index.js

"scripts": {
"build": "parcel build routes/index.js
--target node
--bundle-node-modules
--no-source-maps
",
...
}

Where we’re going, we don’t need source maps. I don’t think so anyway.

Running this script (npm run build) will give me a version of routes/index.js that has the entire tree of sub-modules packaged within it. For example, I used the Moment.js library, so the generated packaged file dist/index.js now contains my code along with all of that Moment.js code, so that the node_modules folder isn’t needed. It brings in the code for every require() or import.

Technically I’m only using a tiny fraction of the Moment library, which makes it a bad choice: it doesn’t let me import only the modules I need. The date-fns library is more modular, and I might not need a date library at all, but I wanted something of a decent size to demonstrate the concept. Parcel has experimental tree-shaking support to strip out unused code from external libraries, but it isn’t stable yet.
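For this one format, Moment isn’t strictly needed at all. Here’s a dependency-free sketch that produces the same output (the ordinal helper is hand-rolled for illustration, not from the repo):

```javascript
// Dependency-free equivalent of moment().format('Do MMMM YYYY, h:mm:ss a').
// Hand-rolled for illustration -- not from the repo.
function ordinal(n) {
  const suffixes = ['th', 'st', 'nd', 'rd']
  const v = n % 100
  // 11th-13th are special cases; otherwise the last digit decides
  return n + (suffixes[(v - 20) % 10] || suffixes[v] || suffixes[0])
}

function formatTime(d) {
  const months = ['January', 'February', 'March', 'April', 'May', 'June',
    'July', 'August', 'September', 'October', 'November', 'December']
  const hours = d.getHours() % 12 || 12
  const ampm = d.getHours() < 12 ? 'am' : 'pm'
  const pad = (x) => String(x).padStart(2, '0')
  return `${ordinal(d.getDate())} ${months[d.getMonth()]} ${d.getFullYear()}, ` +
    `${hours}:${pad(d.getMinutes())}:${pad(d.getSeconds())} ${ampm}`
}

// formatTime(new Date(2020, 2, 7, 16, 46, 28)) → '7th March 2020, 4:46:28 pm'
```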

S3 Bucket

I’ve added this to serverless.yml to create a new S3 bucket:

resources:
  Resources:
    HotSourceBucket:
      Type: AWS::S3::Bucket

I’ve not given the bucket an explicit name (they have to be globally unique and I am unimaginative) so it will be automatically generated. In order to capture the generated bucket name so that we can use it later, I can export it. Here’s how I’m using the serverless-stack-output plugin to do that…

  1. npm install serverless-stack-output
  2. Add this at the root level in serverless.yml:

plugins:
  - serverless-stack-output

custom:
  output:
    file: .build/stack.json

3. Also, add this to serverless.yml within our resources:

Outputs:
  HotSourceBucketName:
    Value:
      Ref: HotSourceBucket

When we next run serverless deploy we’ll get the name of our new bucket in .build/stack.json.

We can now write a script to upload our packaged dist/index.js into that bucket. It’s not very interesting. You can see it here: utils/uploadSource.js

We now need a function that will grab that source file and eval() it within the Lambda instances: utils/getJSFromS3.js

It’s basically this…

// Get JavaScript file from S3 and eval it
const data = await s3.getObject(s3Params).promise()
const jsSource = data.Body.toString('utf8')
const result = nodeEval(jsSource, './' + sourceFilename)
return result

The full source performs a simple LastModified check and caches the evaluated module, so it isn’t downloaded again unless it has changed, saving a few milliseconds.
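That caching idea can be sketched as a generic wrapper (a hypothetical shape; the repo’s version is tied to S3’s HeadObject response):

```javascript
// Sketch of a LastModified-based cache: only re-fetch and re-eval the
// module when the object's LastModified timestamp has changed.
function makeCachedLoader(headFn, fetchFn) {
  let cached = null
  let cachedLastModified = null
  return async function load(key) {
    const { LastModified } = await headFn(key) // cheap metadata check
    if (cached && +LastModified === +cachedLastModified) {
      return cached // unchanged -- skip the download and eval
    }
    cached = await fetchFn(key)
    cachedLastModified = LastModified
    return cached
  }
}
```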

So now our main Lambda handler function can be changed to look like this:

async function getTime(event, context, cb) {
  const bucketName = process.env.BUCKET_NAME
  const routes = await getJSFromS3(bucketName, 'index.js')
  const simpleHttpHandler = routes.getTime
  return simpleLambdaHandler(simpleHttpHandler, event, context, cb)
}

In order to populate that process.env.BUCKET_NAME environment variable with the name of our new bucket, we need this within the provider section of our serverless.yml:

environment:
  # Get the ARN for the bucket that we have created
  # e.g. "arn:aws:s3:::packaged-lambda-test-dev-hotsourcebucket-xxxx"
  BUCKET_ARN: !GetAtt HotSourceBucket.Arn

  # Get the name of the bucket we have created
  # e.g. "packaged-lambda-test-dev-hotsourcebucket-xxxxxx"
  BUCKET_NAME: !Ref HotSourceBucket

And in order for our Lambda function to be allowed access to the contents of our new bucket, we need something like this:

iamRoleStatements:
  # Grant privilege to access S3 bucket
  - Effect: Allow
    Action:
      - s3:GetObject
    # ARN for bucket followed by /* = all objects within the bucket
    Resource: !Join ['', [!GetAtt HotSourceBucket.Arn, '/*']]

So now we can redeploy the whole stack (serverless deploy) and redeploy our routes whenever we like (npm run build && node utils/uploadSource).

Deploying changes to the source code now takes me 4.3 seconds, compared to 1–2 minutes previously, and this is just a single function; deploying a whole set of services for larger projects generally takes over five minutes.

There is obviously a small overhead to the service itself, with the HEAD/metadata check on every request. This could be reduced by only checking when the cached source is more than XX seconds old, or whatever suits your needs, but even just for development, it is great to be able to quickly redeploy services into a production-comparable environment rather than a local simulator or suchlike.
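A minimal age-based throttle for that check might look like this (the 30-second window is a placeholder, not a recommendation):

```javascript
// Only hit S3 for a metadata check if the last check is older than maxAgeMs.
let lastCheckedMs = 0

function shouldCheckForNewSource(nowMs, maxAgeMs = 30 * 1000) {
  if (nowMs - lastCheckedMs < maxAgeMs) return false
  lastCheckedMs = nowMs
  return true
}
```

Anything that returns false would just reuse the module already evaluated in the warm Lambda instance.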

Here’s the full source code: lambda-parcel-hot-deploy

Note that technically we can move any dependencies of our routes from dependencies to devDependencies in package.json to reduce the size of the initial Serverless/CloudFormation deployment. We also don’t need to include aws-sdk in our deployed code, as it is always globally available within Lambda functions.

Let me know if you think this is a good idea. I’m going to see if I can put together a similar thing for Firebase Functions, as deploying them just seems to take ages and fail all the bloody time.
