Create React App + Docker — multi-stage build example. Let’s talk about artifacts!
Artifact (software development), one of many kinds of tangible byproducts produced during the development of software
Have you ever been part of a project that required some kind of ‘build’ or ‘compilation’ step? Perhaps a project where you manually edit your ‘source files’ first, then you run some command to produce the final ‘production-ready’ assets?
Just want to see the code? You can skip straight to the repo I set up for this post.
Terminology: every time you see the word ‘artifact’ in this post, I’m referring to something like an executable, a directory, or anything else that is produced as part of your workflow — it’s typically the thing you want to run on a server & it’s almost always excluded from version control.
Build tools have extremely different requirements from those of your production App.
This is a big problem. Whether it’s a simple blog generated from Markdown files, a fully-fledged SPA written in something like Angular or React, or any other type of project that uses tooling — the dependencies required to ‘build’ your project — that is, take your source files and produce the actual thing you want to throw on a server — are vastly different.
Just take a look inside your `node_modules` directory (or equivalent in your chosen lang/env) — I bet all (or most) of those packages will only be required for the build process — and if that’s true, they have no place on a production server, ever!
Of course, no-one right now would admit to building their projects on the same server that runs their App, but we all know it happens… too often.
To get around this problem, the more responsible projects out there will tend to do one of the following:
- 1) Have developers run the ‘build’ command locally, producing an artifact that is then ‘uploaded’ somewhere, or added to a docker image etc…
- 2) Have a separate CI service sitting in between Github & the production server — the artifact can be produced there instead and then deployed to a server…
- 3) Run the build process on the same server as that which will run the app in production…
But now, for those using Docker, there’s a better way: a technique that allows you to consolidate your build + production setup into a single Dockerfile. This has huge implications, as it enables things such as auto-builds/deployments, often without the need for yet another third-party service.
Docker multi-stage builds
No need for jargon here, the concept is so simple it’s brilliant.
- 1) Create the environment needed for your build process
- 2) Run that build process to produce your artifact
- 3) Create your production environment
- 4) Copy the artifact into your production environment
- 5) Discard EVERYTHING ELSE from the build environment.
- 6) profit?…
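As a sketch, those steps map onto a single hypothetical Dockerfile like this (the image names, stage name and paths here are placeholders, not part of any real project):

```dockerfile
# Stage 1: the build environment (discarded at the end)
FROM some-build-image as builder
WORKDIR /app
COPY . .
RUN ./build.sh        # produces ./dist — our ‘artifact’

# Stage 2: the production environment
FROM some-runtime-image
# Copy ONLY the artifact out of the first stage…
COPY --from=builder /app/dist /srv/app
# …everything else from stage 1 is thrown away
CMD ["serve", "/srv/app"]
```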
The fact that Docker handles all of this complexity is amazing — now let’s run through a real-world example to fully understand it.
I’m going to use the popular `create-react-app` CLI tool in this example, but you can take the concept and apply it to any similar situation.
Tutorial using `create-react-app`
Step 1: Install
```shell
yarn global add create-react-app
```
Step 2: Create a new project
- After creating a new project, you’ll notice you have a ‘src’ directory containing the files you should edit in development.
Step 4: Add build process to Dockerfile
We’ll build upon the latest official NodeJS Docker image, which comes with `yarn` pre-installed.

```dockerfile
FROM node:7.10 as build-deps

WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY . ./
RUN yarn build
```
- On line 1, we’re using the `FROM <image>:<tag> as <name>` format, which is new to Docker 17.05. The `as build-deps` part allows us to name this part of the build process. That name can then be referred to when configuring the production environment later.
- On lines 4 & 5 we copy `package.json` & `yarn.lock` into the image and then run `yarn` — this separates the dependency installation from the edits to our actual source files. This allows Docker to cache these steps, so that subsequent builds — ones in which we only edit source files and don’t install any new dependencies — will be faster.
- Next, on lines 6 & 7, we copy everything else into the image and then run the `build` command. This will produce the ‘artifact’ inside of the `build` directory — just as it would if you were to run this command locally!
- Be careful: `COPY . ./` can be quite dangerous, as it will copy the entire build context into the image, which may be huge! Add a `.dockerignore` file to combat this; mine would look something like https://gist.github.com/anonymous/f1b3e2cc530900338a0c38bce5e0e4c1
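In case that anonymous gist is no longer available: as an illustration, a minimal `.dockerignore` for a project like this might look something like the following (these entries are my assumption of typical excludes, not the gist’s exact contents):

```
node_modules
build
.git
```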
Step 5: Add production environment to the SAME Dockerfile
This is where things start to get seriously interesting! In the exact same Dockerfile we can add the setup for our production environment, right below the setup for the build process!
When Docker sees a second `FROM` statement, it begins an entirely new ‘build stage’ — which includes NOTHING from the first stage. That’s right, the whole thing is discarded… kind of. Crucially, it does allow you access to the previous stage’s file system. So this is where everything starts to come together and make sense, because Docker allows you to selectively copy anything you like from the first build stage into the second one!
This means we can grab hold of the `build` directory — which is our ‘artifact’ — and discard everything else from the first stage. So everything about the base NodeJS Docker image is discarded, along with all the files we don’t choose to copy over into the new build stage.
In this example, with `create-react-app`, that means we get to wave goodbye to the 22,676 files required to build the project — none of those are needed to serve the application, so we don’t want them lingering around!
Let’s see it in action
```dockerfile
FROM nginx
COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
- We’re using one of the official nginx images here to serve our application, but this could be any other type of server — I only chose nginx as it’s popular & I know how to configure it :)
- On line 2 is the shiny new stuff. The `COPY` statement has been around since Docker first hit the scene, but so far it’s been limited to copying files from a context (like the host machine) into an image. The new part is the flag `--from=build-deps` — if you remember back to the first stage, `build-deps` is the name we gave that stage, and this is how we can refer to it here.
- Again on line 2: we know the first stage produced the `build` directory as its artifact, so we append that path to the working directory and end up with `/usr/src/app/build` — the absolute path of our artifact inside the first stage.
- So, we know how to access the artifact from the first stage; now we just need to copy it into the correct place in our production environment — and because we’re using stock nginx, that directory is `/usr/share/nginx/html`.
- The final 2 lines are just the regular Docker commands to expose a port and run the server when a container starts.
Step 6: Build the image!
Now we get to put it all together — we have both our development build process & production environment specified in a single Dockerfile — it should look something like:
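Assembled from the two stages above, the combined Dockerfile would look roughly like this (the `nginx` base here is the stock official image; the exact tags you pin are up to you):

```dockerfile
# Stage 1: build the artifact
FROM node:7.10 as build-deps

WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY . ./
RUN yarn build

# Stage 2: serve the artifact with nginx
FROM nginx
COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```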
Now we can instruct Docker to create an image from this:
```shell
docker build . -t shakyshane/cra-docker
```
- `docker build .` instructs Docker to use the current directory as its build context
- `-t shakyshane/cra-docker` instructs Docker to ‘tag’ this particular build. In this case I’m naming the image as it would appear on my Docker Hub account, but you can use any tag name you want.
Step 7: Run it locally to test it works!
After running the previous build command, you can now use the tag name to start a container from this image.
```shell
docker run -p 8080:80 shakyshane/cra-docker
```
- `-p 8080:80` allows us to map port `8080` on our local dev machine to port `80` inside the container — you can omit the first part if you’re happy for Docker to assign a random port for you, eg: `-p 80` will result in something like `http://localhost:32888` — which will change each time you run it.
- If it all worked well you should now be able to see the following in your browser:
Now that you’ve seen how to build and serve your project with Docker, you can go ahead and take advantage of everything that containers have to offer, some examples are:
- 100% consistent builds across any machine that can run Docker
- Fully automated deployments via a service like Docker Cloud (check this example which includes fully-automated SSL certs)
- Run your EXACT production setup locally before deploying
- Combine with other services, eg: for a frontend App you might want to add something like a CouchDB instance — this is a trivial task with Docker.
- etc etc
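As a sketch of that last point, combining this app with another service is only a few lines of `docker-compose.yml` — the service names and host ports below are illustrative choices, not prescriptions:

```yaml
version: "3"
services:
  web:
    image: shakyshane/cra-docker   # the image we built above
    ports:
      - "8080:80"
  db:
    image: couchdb                 # official CouchDB image
    ports:
      - "5984:5984"
```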
Docker is taking over the world, and with every release it’s getting easier for regular devs to utilise its power!
The example given here does not include the production-ready configuration for the nginx server — I didn’t want the details of such a thing to cloud the content of this post — if there’s demand I can follow up with a post detailing that!
Further reading: ‘Use multi-stage builds’ in the official Docker documentation — multi-stage builds are a new feature in Docker 17.05, and they will be exciting to anyone who has struggled to optimize their Dockerfiles.
Like this? If you did, and you find yourself doing any front-end work, perhaps you’d enjoy some of my lessons on https://egghead.io/instructors/shane-osbourne — many are free, and I cover vanilla JS, TypeScript, RxJS and more.