How to get headless Chrome running on AWS Lambda

An adventure in getting Chrome (read: Chromium) to run “serverless-ly” from compiling it to deploying it on AWS Lambda.

Marco Lüthy
Mar 17, 2017 · 14 min read


  • Headless Chrome is a thing.
  • You can run it on AWS Lambda (with some effort).
  • This article walks you through how to compile and run it on Lambda.
  • I created the serverless-chrome project so that you can immediately start using headless Chrome on Lambda instead of reading the rest of this article.


I’ve done a number of projects in the past which, in some way, made use of PhantomJS. Usually something along the lines of testing, scraping, or for generating PDFs. When I came across NightmareJS (think CasperJS but with Electron instead of PhantomJS and less emphasis on testing) a few days ago, I wondered, “Can I use this to generate PDFs from a URL?” The answer to that was yes: with the .pdf() method. I’m a fan of AWS Lambda, “serverless” and FaaS in general, and as a result the next thing I wondered was whether or not I could run NightmareJS on Lambda.

For shits ’n’ giggles, I decided I’d have a go at using NightmareJS on AWS Lambda. That’s when I came across this Issue discussing how to run NightmareJS “headlessly” on Linux. The problem was that Electron, which NightmareJS uses for rendering and interacting with web pages, requires a windowing system or framebuffer like X or Xvfb to run. In other words, Electron (read: Chromium) wasn’t truly “headless.” This was important because there’s no windowing system on Lambda. In the same Issue thread, I came across this comment which had a link to this Issue on the Chromium issue tracker discussing a headless Chrome. This was the first time I’d heard about headless Chrome.

I had two thoughts:

  1. I could try to build and package the Xvfb binary and include it in my Lambda function’s deployment package, then I could use NightmareJS (read: Electron) in my Lambda function’s handler. I had come across one two three different threads and attempts to do this so it seemed promising.
  2. I could try to build and package headless Chrome and include it in my Lambda function’s deployment package, then use the Chrome Debugger Protocol in my Lambda function’s handler to control/drive Chrome.

I surmised that there was an immediate problem with the former: AWS Lambda deployment limits. Lambda limits the size of a function’s deployment package (.zip) to 50 MB. (Or does it?) The uncompressed size of code/dependencies that can be zipped into a deployment package is limited to 250 MB. My worry was that packaging together the Xvfb binary and NightmareJS with its Electron dependency would exceed one or both of those limitations.
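Those limits are easy to sanity-check in code. Here’s a quick sketch (the helper function is mine, not part of any AWS SDK) of whether a given package fits:

```javascript
// Sanity-check a package against Lambda's limits (50 MB zipped upload,
// 250 MB uncompressed), as described above.
const LAMBDA_ZIP_LIMIT = 50 * 1024 * 1024
const LAMBDA_UNZIPPED_LIMIT = 250 * 1024 * 1024

function fitsOnLambda (zippedBytes, unzippedBytes) {
  return zippedBytes <= LAMBDA_ZIP_LIMIT && unzippedBytes <= LAMBDA_UNZIPPED_LIMIT
}

// ~44 MB gzipped / ~125 MB uncompressed headless Chromium binary: fits
console.log(fitsOnLambda(44 * 1024 * 1024, 125 * 1024 * 1024)) // → true
```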

To be honest, I didn’t investigate much further. I was excited about headless Chrome and my self-assigned mission was clear: Get headless Chrome running on Lambda.

Building headless Chrome for AWS Lambda

Compiling a non-debug build of the headless Chromium shell yields a binary that’s ~125 MB and just under 44 MB when gzipped. This means it fits within the 250 MB uncompressed and 50 MB size limitation for a Lambda function’s deployment package with enough space left over for some code to do something useful.

We need to compile Chrome in an environment which is as similar to the Lambda Execution Environment as possible. The easiest way to do this is with an EC2 instance that shares the AMI which Lambda is based on.

Preparing the EC2 Instance

The following steps are based on this and this.

Create a new EC2 instance using the community AMI named amzn-ami-hvm-2016.03.3.x86_64-gp2 (in the us-west-2 region it’s identified as ami-7172b611).

Pick an Instance Type with at least 16 GB of memory. Compiling takes about 4–5 hours on a t2.xlarge, 2–3ish hours on a t2.2xlarge, or about 45 minutes on a c4.4xlarge. Remember to stop the instance after you’re done using it to avoid unnecessary charges!

Give yourself a Root Volume that’s at least 30 GB (40 GB if you want to compile a debug build — but you won’t be able to upload it to Lambda because it’s too big.)

Launch the instance and SSH in. Then run:

sudo sh -c 'printf "LANG=en_US.utf-8\nLC_ALL=en_US.utf-8" >> /etc/environment'
sudo yum install -y git redhat-lsb python bzip2 tar pkgconfig atk-devel alsa-lib-devel bison binutils brlapi-devel bluez-libs-devel bzip2-devel cairo-devel cups-devel dbus-devel dbus-glib-devel expat-devel fontconfig-devel freetype-devel gcc-c++ GConf2-devel glib2-devel glibc.i686 gperf glib2-devel gtk2-devel gtk3-devel java-1.*.0-openjdk-devel libatomic libcap-devel libffi-devel libgcc.i686 libgnome-keyring-devel libjpeg-devel libstdc++.i686 libX11-devel libXScrnSaver-devel libXtst-devel libxkbcommon-x11-devel ncurses-compat-libs nspr-devel nss-devel pam-devel pango-devel pciutils-devel pulseaudio-libs-devel zlib.i686 httpd mod_ssl php php-cli python-psutil wdiff --enablerepo=epel

Yum will complain about some packages not existing. It didn’t stop me from building the headless Chromium shell, so I didn’t look into them. Shut up, Yum. No one likes you. Let’s ignore it and move on. Next:

git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
echo "export PATH=$PATH:$HOME/depot_tools" >> ~/.bash_profile
source ~/.bash_profile
mkdir Chromium && cd Chromium
fetch --no-history chromium
cd src

Building the source

Currently, for Linux builds, the Chromium source code is hard coded to make use of the tmpfs mounted at /dev/shm. This is a problem because AWS Lambda containers don’t have a tmpfs mounted—and there’s no mount command installed so you can’t mount one, nor does your Lambda function have write permission to create /dev/shm.


Or not.

Let’s modify the Chromium code so that it doesn’t use /dev/shm. In fact, this is the fallback behaviour in the code! Open up src/base/files/ and modify the GetShmemTempDir() function such that it always returns the OS’s temp dir (/tmp). A simple way to do this is to just remove the entire #if defined(OS_LINUX) macro block in the GetShmemTempDir() function. A less drastic change is to hardcode use_dev_shm to false:

bool GetShmemTempDir(bool executable, FilePath* path) {
#if defined(OS_LINUX)
  bool use_dev_shm = true;
  if (executable) {
    static const bool s_dev_shm_executable = DetermineDevShmExecutable();
    use_dev_shm = s_dev_shm_executable;
  }
  // cuz lambda
  use_dev_shm = false;  // <-- add this. Yes it's kinda hack-y
  if (use_dev_shm) {
    *path = FilePath("/dev/shm");
    return true;
  }
#endif
  return GetTempDir(path);
}

With that change, it’s time to compile. Let’s pick things back up in the src directory. First, we set some build flags for building a release version of the headless Chrome shell:

mkdir -p out/Headless
echo 'import("//build/args/headless.gn")' > out/Headless/args.gn
echo 'is_debug = false' >> out/Headless/args.gn
echo 'symbol_level = 0' >> out/Headless/args.gn
echo 'is_component_build = false' >> out/Headless/args.gn
echo 'remove_webcore_debug_symbols = true' >> out/Headless/args.gn
echo 'enable_nacl = false' >> out/Headless/args.gn
gn gen out/Headless

Now we’re ready to start the build:

ninja -C out/Headless headless_shell

Take a coffee/tea/beer break. Go on a walk. Twiddle your thumbs. Depending on your EC2 Instance Type, this may take a while.

Once the build finishes, test out headless Chrome. In another terminal tab/window SSH into the EC2 instance with local port-forwarding:

ssh -i path/to/your/key-pair.pem -L 9222:localhost:9222 ec2-user@<the-instance-public-ip>

Upon SSH-ing in, start headless Chrome with:

Chromium/src/out/Headless/headless_shell --remote-debugging-port=9222 --no-sandbox --disable-gpu

On your local machine, open up your un-beheaded browser of choice and navigate to http://localhost:9222/. You should see something like this:


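The page you see is rendered from the /json endpoint headless Chrome exposes on the debugging port; each inspectable target it lists is a JSON object shaped roughly like this (the field values below are placeholders, not real output):

```javascript
// Rough shape of one entry returned by http://localhost:9222/json
// (all values here are placeholders)
const target = {
  id: 'abc123', // placeholder target id
  type: 'page',
  title: 'about:blank',
  url: 'about:blank',
  webSocketDebuggerUrl: 'ws://localhost:9222/devtools/page/abc123',
}

console.log(target.type)
```

The webSocketDebuggerUrl is what a CDP client connects to in order to drive that tab.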
Finally, we make a tarball of the relevant file(s) we’ll need to run headless Chrome on Lambda.

cd ~/Chromium/src
mkdir out/headless-chrome && cd out
cp Headless/headless_shell headless-chrome/
tar -zcvf chrome-headless-lambda-linux-x64.tar.gz headless-chrome/

Download the tarball to your local machine:

scp -i path/to/your/key-pair.pem ec2-user@<the-instance-public-ip>:/home/ec2-user/Chromium/src/out/chrome-headless-lambda-linux-x64.tar.gz ~/Desktop/chrome-headless-lambda-linux-x64.tar.gz

Once the tarball has been downloaded, we won’t need the EC2 instance anymore. Be sure to shut down/stop the EC2 instance to avoid unnecessary charges. Now we’re ready to do something with headless Chrome!

Using headless Chrome in a Lambda function

To do something useful, we need a way to control or drive Chrome. Conveniently someone’s already thought of that and created the Chrome Debugging Protocol (CDP). If you’re familiar with PhantomJS, then you can roughly equate CDP to PhantomJS’s JavaScript API interface for controlling/driving the headless browser. In our Lambda function, we’ll use CDP to interact with headless Chrome to make it do things like navigate to a URL.
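Under the hood, each CDP command is a small JSON message sent over a WebSocket. A sketch of the shape (the id and URL are just example values):

```javascript
// A CDP command message: an id to correlate the response,
// a "Domain.method" name, and optional params.
const command = {
  id: 1,
  method: 'Page.navigate',
  params: { url: 'https://example.com' },
}

// This serialised string is what actually travels over the WebSocket
console.log(JSON.stringify(command))
```

Chrome replies with a message carrying the same id, and pushes domain events (like network activity) as separate messages.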

We’ve got all the pieces, now we’ll glue them together.

Create a new Lambda function

There are a couple of great tools that simplify the creation/deployment of Lambda functions like Apex, Claudia.js, Gordon, Shep, DEEP, node-lambda, and Chalice, but for this example we’ll use Serverless as it’s the one I’m most familiar with.

First, make sure that you’ve got a recent version of Node and NPM installed. Then, install the latest version of Serverless with:

npm install serverless@latest -g

Let’s create a new directory for our code and initialise our Serverless service:

mkdir chrome-lambda && cd chrome-lambda
npm init -y
serverless create --template aws-nodejs

This will generate two files we’re interested in. serverless.yml and handler.js. Open up serverless.yml in your code editor and modify it so that it looks like this:

service: headless-chrome-example

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-west-2

package:
  exclude:
    - ./**
  include:
    - node_modules/**
    - headless-chrome/**
    - handler.js

functions:
  mimir:
    handler: handler.run
    memorySize: 1024
    timeout: 30

For more details on what’s happening here, take a look at the reference documentation. In short, we’re telling Serverless that we want it to create a Lambda function called mimir in the us-west-2 region. handler points at the code which will contain our Lambda function’s handler. In our case, handler.run refers to the handler.js module’s run named export (i.e. in handler.js we export module.exports.run = function () { … }).

Move the headless Chrome tarball we created earlier into our chrome-lambda project directory and uncompress it:

mv ~/Desktop/chrome-headless-lambda-linux-x64.tar.gz ./
tar -zxvf chrome-headless-lambda-linux-x64.tar.gz

With this done, we’re ready to start writing the code which will spawn headless Chrome in our Lambda function.

Spawning headless Chrome

Before we can do anything with headless Chrome in our Lambda function, we have to ensure that headless Chrome is running. We can do this by using Node’s Child Process spawn() function. Open up handler.js in your code editor and modify it to:

'use strict'

const childProcess = require('child_process')
const os = require('os')
const path = require('path')
const cdp = require('chrome-remote-interface')
const get = require('got')

const LOADING_TIMEOUT = 15000
const STARTUP_TIMEOUT = 5000
const URL_TO_LOAD = ''

module.exports.run = (event, context, callback) => {
  const chrome = childProcess.spawn(
    path.resolve(__dirname, 'headless-chrome/headless_shell'),
    ['--disable-gpu', '--no-sandbox', '--homedir=/tmp', '--data-path=/tmp/data-path',
     '--disk-cache-dir=/tmp/cache-dir', '--remote-debugging-port=9222'],
    { cwd: os.tmpdir(), shell: true }
  )

There are a few things to point out in the previous snippet:

  1. We’re passing the --disable-gpu flag to headless Chrome because there’s no GPU available to us on Lambda.
  2. We’re passing a couple of flags like --homedir with paths pointing to /tmp. This is because /tmp is the only place we have write permissions.
  3. --remote-debugging-port enables the Chrome Debugging Protocol which we’ll use to drive/control Chrome.

Once spawned, Chrome won’t be ready for us to communicate with it until it’s completed starting up. This takes a few hundred milliseconds. The most reliable way that I’ve tried so far is to make GET requests to Chrome until there’s a response (or time out after a set period). To do this, let’s add an npm module to simplify making GET requests:

npm install got --save

Next we’ll add a Promise which will resolve when Chrome is ready. Add the following to the end of the run function in handler.js:

const waitUntilChromeIsReady = (startTime = Date.now()) =>
  new Promise(
    (resolve, reject) =>
      Date.now() - startTime < STARTUP_TIMEOUT
        ? get('http://localhost:9222/json')
            .then(resolve)
            .catch(() => {
              setTimeout(() => waitUntilChromeIsReady(startTime).then(resolve, reject), 100)
            })
        : reject()
  )

Now the scaffolding is in place. We’ve got everything lined up to connect to Chrome and have it do our bidding!

Driving with the Chrome Debugging Protocol

Since we’ve decapitated Chrome and have no GUI or windowing system to interact with a webpage, we need a programmatic way to drive headless Chrome. This is where the Chrome Debugging Protocol comes into play. Check out the Debugger Protocol Viewer to explore the CDP documentation and familiarise yourself with the different domains.

CDP works over WebSockets. It’s quite easy to use a module like ws directly to connect to a CDP instance, but we’ll use the Chrome Remote Interface module to abstract away some of the message event handling and simplify our code. Install chrome-remote-interface:

npm install chrome-remote-interface --save

To demonstrate our Lambda function doing something useful, we’re going to load a page and record all the network requests that Chrome makes. Effectively, we’re recreating the “Network” tab in the Chrome DevTools. Using CDP (via a layer of sugar provided by chrome-remote-interface) we’re going to do the following:

  1. Create a “tab” in Chrome and connect to it.
  2. Enable the Network and Page domains so that we receive messages over the WebSocket from those domains.
  3. Navigate to a URL and wait for the page to finish loading, recording each network request we make along the way.
  4. Close the connection to the tab and return the list of network requests we made loading the URL from our Lambda function.

To do all of this, add the following snippet to the end of the handler’s run function:

waitUntilChromeIsReady()
  .then(() =>
    cdp().then((client) => {
      const url = URL_TO_LOAD
      const { Network, Page } = client
      const requestsMade = []
      let doneLoading = false

      const waitUntilPageIsLoaded = (startTime = Date.now()) =>
        new Promise(
          (resolve, reject) =>
            !doneLoading && Date.now() - startTime < LOADING_TIMEOUT
              ? setTimeout(() => waitUntilPageIsLoaded(startTime).then(resolve, reject), 100)
              : resolve()
        )

      Network.requestWillBeSent(params => requestsMade.push(params))

      Page.loadEventFired(() => {
        doneLoading = true
      })

      Promise.all([Network.enable(), Page.enable()])
        .then(() => Page.navigate({ url }))
        .then(() => waitUntilPageIsLoaded())
        .then(() => {
          client.close()
          chrome.kill()
          callback(null, { url, requestsMade })
        })
        .catch((error) => {
          throw new Error(error)
        })
    })
  )
  .catch((error) => {
    chrome.kill()
    callback(null, {
      message: 'There was an issue connecting to Chrome',
      error,
    })
  })

Here’s what’s happening in that code:

  • First, we waitUntilChromeIsReady().
  • Then we use chrome-remote-interface to open a connection to headless Chrome with cdp().
  • Then we’ve finally reached the code block which actually does something with Chrome!
  • Network.requestWillBeSent() is an event handler provided by chrome-remote-interface which is called whenever Chrome is about to make a network request. In our code, we just push each event to the requestsMade array.
  • Page.loadEventFired is another convenience event handler provided by chrome-remote-interface. It’s called once the full page has finished downloading and rendering. In our case, this means we’re done and can end our Lambda execution. In more advanced usages, you’d use this event to figure out if you can start manipulating or interacting with the page’s DOM, for example.
  • Next, we wait for both the Network and Page domains to be enabled with Network.enable() and Page.enable().
  • Then we navigate Chrome to our url with Page.navigate().
  • Then we waitUntilPageIsLoaded(). This function will wait until Page.loadEventFired is called.
  • Then, once the page is loaded, we call client.close() which disconnects us from the WebSocket we have open to headless Chrome. Next we chrome.kill() which ends the headless Chrome shell’s process which we spawned at the very beginning. These are both important: failing to close either will keep the Lambda execution from ending until the Lambda function’s timeout is passed. Notably, it is possible to keep the spawned Chrome process running (for reuse in the next Lambda function invocation, for example, if our Lambda is “warm.”) You can keep Chrome around by making use of the detached property on the Node Child Process’s spawn() method instead of killing it.
  • Finally, we execute callback() and return our result payload from our Lambda function.

We’ve now completed our handler.js for our Lambda function. Time to try it!

Deploying the Lambda function

We need to set up our AWS credentials before we can deploy our Lambda function. Follow these instructions on creating AWS access keys if you don’t already have them. Then either create an AWS profile on your computer with these instructions and select it with:

export AWS_PROFILE=<your-profile-name>

Or, export your key and secret with:

export AWS_ACCESS_KEY_ID=<your-key-here>
export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>

Now you’re ready to deploy. Deploy with:

serverless deploy

Finally, the big moment. Let’s invoke our function and run headless Chrome on AWS Lambda! Invoke the function with:

serverless invoke --function mimir

You should see a semi-pretty-printed JSON object with the list of network requests we made while loading our url, including request headers and timing info.

If you’ve run into any issues, all of the code for this Serverless function is available here.

Clearly, it’s an example. In its current form it’s not all that useful. But there’s a lot of usefulness to be extracted from the Chrome Debugging Protocol. Try it—experiment with some of the other domains.

Is running Chrome on Lambda a good idea?

Maybe. Maybe not. Like most things, it depends on what your goals are. Some things to consider:

  • Since we’re using the --disable-gpu flag, we’re missing out on GPU-rendering performance boosts.
  • We’ve disabled shared memory in /dev/shm, which means Chrome can’t utilise tmpfs for performance gains.
  • Starting up the headless Chrome shell takes a few hundred milliseconds even with the 1536 MB-sized Lambda function. Time is money, and it may be more economical to run headless Chrome on an EC2 instance instead with jobs processed serially or parallel-serially on a single instance rather than invoking a Lambda function for each job.
  • Most of your Lambda functions’ execution time will likely be spent just waiting while Chrome is downloading resources to render a web page. Again, time is money.
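To put rough numbers on the “time is money” point, here’s a back-of-envelope cost helper. The per-GB-second price is an assumption based on Lambda’s published pricing at the time of writing; check current rates:

```javascript
// Back-of-envelope Lambda invocation cost.
// $0.00001667/GB-second is an assumed price; verify against AWS pricing.
const PRICE_PER_GB_SECOND = 0.00001667

function invocationCost (memoryMb, durationMs) {
  const gbSeconds = (memoryMb / 1024) * (durationMs / 1000)
  return gbSeconds * PRICE_PER_GB_SECOND
}

// A 1536 MB function busy for 4 seconds consumes 6 GB-seconds
console.log(invocationCost(1536, 4000).toFixed(6)) // → 0.000100
```

Multiply that per-job cost by your job volume and compare it against an always-on EC2 instance processing jobs serially to decide which is more economical.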


So that was a lot of effort. Having done it a few times now, I’m over it. For that reason I started serverless-chrome. Its aim is to provide the scaffolding for using headless Chrome in a Lambda function invocation. serverless-chrome takes care of building and bundling the Chrome binaries and making sure Chrome is running when your Lambda function executes so that all you have to worry about is using the Chrome Debugging Protocol to drive it. Over the next couple of weeks I’ll also add some “example” handlers for common patterns like grabbing a screenshot of a page, printing to PDF, some scraping/DOM manipulation, etc.

Thank you for reading!

Update — March 23rd, 2017:

AWS released support for Node 6.10. This article and source code have been updated to make use of Node 6.10. The biggest change was the removal of a Buffer.from polyfill. Section removed:

chrome-remote-interface has ws@2.x as a dependency which in turn uses Buffer.from and Buffer.allocUnsafe which were both introduced in Node v6. Since we’re stuck with Node v4.3.2 we have to polyfill them. Steve Yang has done a good job creating this polyfill. We’ll use a version adapted specifically for Lambda’s Node v4.3.

