A better development environment for Serverless apps using Docker and docker-compose
-The application is not working.
-Well, it works on my machine.
Every developer, at least once
When you join a new team or just buy a new computer, the first thing you end up doing for your software projects is setting up a development environment: Install compilation tools, IDEs, local servers and databases, etc.
For a Serverless application, you will at least have to install Node.js (the version your project supports) and the Serverless framework. You also have to make sure that all the ports used by your application are available on the local machine. If you are using plugins like serverless-dynamodb-local, you will need to install Java as well.
Docker is an excellent tool for handling cross-platform development environments: put your Serverless app in a Docker container along with all of its dependencies, and Docker will take care of setting up your environment for you.
In this post, we will use the Serverless application we have created in previous posts, along with its React UI, configure Docker images for them and execute them locally using docker-compose.
Install Docker
I will not get into a lot of details on this step. Docker’s documentation is very clear and the installation is simple.
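Once it is installed, you can verify from a terminal that both tools are available (the exact version numbers will vary by platform):

```shell
# Check that the Docker CLI and docker-compose are installed
docker --version
docker-compose --version

# Optionally, run a throwaway container to confirm the daemon works end to end
docker run --rm hello-world
```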
Dockerize the UI
First, let’s take the React application we downloaded from Auth0 and put it in a Docker container. We will use a Node Docker image, copy our source code into the image, and use npm to install the dependencies, run the build and start the application.
In the project’s root, create a file called Dockerfile:
FROM node:6-alpine
WORKDIR /app
COPY package.json package.json
RUN npm install
COPY src src
COPY .babelrc .env karma.conf.js tests.webpack.js webpack.config.js ./
VOLUME /app/src
CMD ["npm", "run", "start"]
Let’s go through the Dockerfile step by step. First, we select Node 6 as our base image, in its Alpine variant, which gives us a smaller image:
FROM node:6-alpine
Next, we choose /app as the working directory of the app, copy package.json into it, and install the project’s Node dependencies:
WORKDIR /app
COPY package.json package.json
RUN npm install
Why do we only copy package.json? Because we want to take full advantage of Docker’s layer cache. For every instruction in the Dockerfile, Docker creates a layer and caches it. When Docker detects changes in the files you are copying, it invalidates the cache of that layer and of all the layers after it.
Our package.json doesn’t change as often as the rest of our code, so this ordering guarantees that even if we change files in our src folder, Docker won’t execute npm install again as long as package.json is unchanged, saving time on each build.
Next, we copy all the files that will be changed more often to the container:
COPY src src
COPY .babelrc .env karma.conf.js tests.webpack.js webpack.config.js ./
Since we want to take advantage of the live reloading provided by Webpack, we declare a volume for our src folder. This will allow us to map the src folder in the container to the src folder on our local machine, so that changes we make are picked up at runtime by Webpack:
VOLUME /app/src
Finally, we execute the command that starts the development server:
CMD ["npm", "run", "start"]
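With the Dockerfile in place, you can build and run the UI container on its own before wiring it into docker-compose. The image tag taco-gallery-ui below is just an illustrative name:

```shell
# Build the image from the UI project's root (where the Dockerfile lives)
docker build -t taco-gallery-ui .

# Run it, publishing the dev server port and mounting src for live reloading
docker run --rm -p 3001:3001 -v "$(pwd)/src:/app/src" taco-gallery-ui
```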
Since we are using the Webpack dev server through the hjs-webpack plugin, by default it binds the server to the hostname localhost, which is not reachable from outside the container. That’s why we need to configure hjs-webpack to use ‘0.0.0.0’ as the hostname:
var config = getConfig({
  isDev: isDev,
  in: join(src, 'app.js'),
  out: dest,
  port: 3001,
  hostname: '0.0.0.0',
  html: function (context) {
    return {
      'index.html': context.defaultTemplate({
        title: 'auth0 React Sample',
        publicPath: isDev ? 'http://localhost:3001/' : '',
        meta: {
          'name': 'auth0 React Sample',
          'description': 'A minimal reactJS sample application showing auth0 integration'
        }
      })
    }
  }
});
Dockerize the Serverless app
The Dockerfile for the Serverless app is a little more complex:
FROM node:6
WORKDIR /app
RUN echo "deb http://http.debian.net/debian jessie-backports main" | \
tee --append /etc/apt/sources.list.d/jessie-backports.list > /dev/null && \
apt-get update -y && \
apt-get install -t jessie-backports openjdk-8-jdk -y && \
update-java-alternatives -s java-1.8.0-openjdk-amd64
COPY package.json package.json
RUN npm install && npm install -g serverless
COPY .npmignore serverless.yml webpack.config.js ./
RUN npm run install:dynamodb
COPY .env .env
COPY src src
COPY config config
VOLUME /app/src
CMD ["npm", "run", "start"]
Step by step: first we select node:6 as our base image. Notice that we are not using the Alpine version, as it was causing some conflicts with dynamodb-local; for that reason we use an image based on Debian Jessie. We also set our working directory to /app:
FROM node:6
WORKDIR /app
Now, since we are using the plugin serverless-dynamodb-local, we need to install Java:
RUN echo "deb http://http.debian.net/debian jessie-backports main" | \
tee --append /etc/apt/sources.list.d/jessie-backports.list > /dev/null && \
apt-get update -y && \
apt-get install -t jessie-backports openjdk-8-jdk -y && \
update-java-alternatives -s java-1.8.0-openjdk-amd64
Just like we did with the UI app, we copy our package.json file and install the npm dependencies, including the Serverless framework in global scope. We also copy the files Serverless needs in order to start the serverless-offline plugin:
COPY package.json package.json
RUN npm install && npm install -g serverless
COPY .npmignore serverless.yml webpack.config.js ./
We install the serverless-dynamodb-local plugin:
RUN npm run install:dynamodb
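For reference, the install:dynamodb and start scripts in package.json are assumed to look something like the following (your exact scripts may differ); serverless dynamodb install is the command serverless-dynamodb-local provides to download the local DynamoDB jar:

```json
{
  "scripts": {
    "install:dynamodb": "serverless dynamodb install",
    "start": "serverless offline start"
  }
}
```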
Just like we did with the UI app, we copy the files we expect to change most often, declare a volume for our src folder, and start our development server:
COPY .env .env
COPY src src
COPY config config
VOLUME /app/src
CMD ["npm", "run", "start"]
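As with the UI, you can sanity-check this image on its own. The tag taco-gallery-server is illustrative; port 3000 is assumed for serverless-offline and 8000 for DynamoDB local:

```shell
# Build the Serverless app image from its project root
docker build -t taco-gallery-server .

# Publish both the API port and the DynamoDB local port
docker run --rm -p 3000:3000 -p 8000:8000 -v "$(pwd)/src:/app/src" taco-gallery-server
```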
We also need to configure the serverless-offline plugin to use ‘0.0.0.0’ as the host, so that Docker can expose it. In our serverless.yml, we configure it like this:
custom:
  webpackIncludeModules:
    packagePath: './package.json'
  dynamodb:
    start:
      migrate: true
  serverless-offline:
    host: 0.0.0.0
  babelOptions:
    presets: ["es2015", "stage-2"]
Start the environment with docker-compose
Now that we have both of our apps in Docker containers, we can glue everything with docker-compose.
First, make sure that the projects containing the source code for both applications live inside the same parent folder, called Serverless in this example. In that parent directory, we create a file named docker-compose.yml:
/Serverless/
  /taco-gallery/
  /taco-gallery-ui/
  docker-compose.yml
Inside docker-compose.yml:
version: "2"
services:
  taco-gallery-ui:
    volumes:
      - "./taco-gallery-ui/src:/app/src"
    ports:
      - 3001:3001
    build: ./taco-gallery-ui/.
  taco-gallery-server:
    volumes:
      - "./taco-gallery/src:/app/src"
    ports:
      - 3000:3000
      - 8000:8000
    build: ./taco-gallery/.
In the docker-compose file, we define two services, one for each of our applications. We map the src folder in our local source tree to the src folder in each container using volumes. We expose the ports that need to be reachable from the browser (if you are using different ports, configure them here), and we point to each application’s Dockerfile in the build section.
Now, you can start your application with the following command:
docker-compose up -d --build
The first time you run it, it will take a while to install all the dependencies, since it starts from the base Node images. On subsequent runs of the same command, Docker’s cache kicks in and the build is much faster.
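A few docker-compose commands you will likely use day to day with this setup:

```shell
# Follow the logs of both services (Ctrl+C to detach)
docker-compose logs -f

# Rebuild and restart a single service after changing its Dockerfile or package.json
docker-compose up -d --build taco-gallery-ui

# Stop and remove the containers and the network
docker-compose down
```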
You can now navigate to your web app at http://localhost:3001 and you should be able to see it running successfully.
Docker allows us to have a development environment that works out of the box, without our developers having to worry about installing the correct versions of the tools and dependencies our app needs. This extra layer of isolation also lets us separate problems in our applications from problems introduced by a badly configured development environment. Serverless applications can leverage this pattern too.
Please notice that the Docker images we created here are for development, and they should not be used in production directly. Ideally, for production you want to deploy your UI to something like Apache, Nginx or AWS S3, and your Serverless app to something like AWS Lambda and AWS API Gateway.