A Beginner’s Guide to Creating a Containerized Web Application With Docker, React, and Laravel

Jonathan Harvey
13 min read · Aug 14, 2018

--

Docker changed my life.

Bear with me, I know that’s a bold statement, and I’d hesitate to write it if it were any less true. If you’ve been programming for a while, you’ve probably experienced that pain induced by trying to manage and deploy virtual machines. If you weren’t around when that was a thing, you might not know why so many developers and companies are choosing to switch to containerized development and deployment.

Just like VMs, containers are a form of virtualization, and images and snapshots are relatively easy to create with both containers and VMs. Both technologies support creating a modern and scalable cloud infrastructure. Where containers excel is in the fact that they are lightweight. The philosophy underlying containers promotes ephemerality, and with tools like Docker Swarm, Kubernetes and Ansible, managing application clusters has become a breeze. Containers are agile, portable, secure, and they are here to stay. If I had to emphasize one strength of using containers, it is that containers make it easy to manage your infrastructure in source. The business value of this is immeasurably high (read: this is why your boss wants you to know Docker).

If the last two paragraphs haven’t convinced you yet, don’t worry, I have more tricks up my sleeve. What better way to convince you that you should be working with containers than to just dive in and create an application?

In this article we’ll walk through the basics of setting up a Dockerized development environment with React and Laravel, while getting practice with Infrastructure as Code (IaC) by using Dockerfiles and Docker Compose. This could be a good introduction to Docker for those who haven’t worked with it before, or a reference for those looking to run their React and Laravel projects from inside containers. Let’s jump in.

Getting Warmed up With Hello World & Using 3rd-Party Docker Images

The first step is getting the preliminaries out of the way. We’ll be using Docker for our containerization, so make sure Docker is installed on your machine. To learn more about Docker and containerization, check out the Docker overview page.

For Mac users, you can find the download and installation instructions here. Windows users should look here, and at the bottom of this page is a list of installations for server platforms like Debian, Ubuntu, and CentOS.

We’ll also be using Docker Compose to run our application stack locally, so make sure you have that installed as well.

After you’re all done with the installations, let’s make sure you’re fully good to go. Open up a shell window and type docker run hello-world. If everything is installed correctly, you should see something like the following:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

If you see this, you’re good to go.

There are three ways of working with Docker images:

  • Using third-party images from Docker Hub (or .tar files and docker load)
  • Creating your own images from a minimal base image like Alpine Linux or Debian
  • A mix of the two: starting with a library image, customizing it, and re-tagging it

We’ll use the first two strategies in this tutorial.
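
To make those strategies concrete, here’s roughly what each looks like on the command line; the image and file names are illustrative:

# 1. Use a third-party image straight from Docker Hub
$ docker pull nginx:latest

# (or load an image shipped as a .tar archive)
$ docker load -i myimage.tar

# 2. Build your own image from a minimal base, via a Dockerfile
$ docker build -t myimage:latest .

# 3. Start from a library image, customize it, and re-tag the result
$ docker run -it --name custom alpine:latest /bin/sh
# ...make changes inside the container, exit, then:
$ docker commit custom myalpine:custom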

Using a 3rd-Party Docker Image

Before we dive into the nitty-gritty, it may be helpful to see an example of just how easy Docker makes it to run your applications.

Want to spin up a WordPress blog and start writing about your side project? Let’s do it in a matter of minutes.

  1. Navigate to some directory where you want this project to live
  2. Create a file in this directory named docker-compose.yml
  3. Paste the following contents into the file, courtesy of the Docker documentation:
version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress

volumes:
  db_data:

In your terminal window, run docker-compose up. Presto! Navigating to localhost:8000 in your web browser and selecting your language will take you to the WordPress setup screen.

The result of navigating to localhost:8000 after running docker-compose up

Is this magic? Witchcraft? No, it’s sufficiently advanced technology; it just looks like magic to the people sitting next to you in the coffee shop. Granted, we’re not here to impress them, since that’s relatively easy. I mean, they already thought you were hacking the Pentagon the second you opened your terminal window. As a note, don’t run this container in production: it’s just to get started, and we haven’t talked about securing the installation or what it takes to run a project with docker-compose in production. If you’re interested in learning more about the WordPress Docker image, check it out here.

We’ll need to stop the WordPress container in order to free up the port it’s forwarding for later. In a terminal window, from the folder in which you added the docker-compose.yml file, run

docker-compose down

and it will stop and remove the WordPress container.

Let’s de-mystify some of this stuff.

Of course, the real power in Docker is in making our own images. This typically involves picking a minimal Linux distribution, listing out the commands used to build the environment needed for your project in a Dockerfile, and then building the image for use.

Let’s try this on our own. We’ll start by creating a Docker environment that we can use to run our client-side application in ReactJS. There are plenty of pre-built Node.js images on Docker’s image hosting service, Docker Hub, and typically you’d use one of these. We’ll start from scratch in order to demonstrate the general principles of building images, as well as to allow a greater level of customization in the image configuration. Here are the steps we need to take to get up and running with a containerized React environment:

  1. We need to define the environment. What libraries and utilities need to be installed on the image to run our application?
  2. We need to test out and write out the commands for building the image in a Dockerfile.
  3. We need to build and run the image and learn how to run the project.

Defining The Environment

We’ll use create-react-app to install our React source. It’s the easiest way to set up a React project with minimal configuration. We’ll install Laravel and create a new project with Composer.

For our client-server application, we’ll use the following folder structure:

- myapp
  - client
  - server

Go ahead and create this directory structure.

Navigate into the client folder at myapp/client and create a file named Dockerfile. This is the file we’ll use to define our Docker image configuration. At a minimum, our Dockerfile needs to know what starting point we’d like to build from. Add the following line to your Dockerfile:

FROM alpine:latest

This tells Docker that we’d like to build our image using the minimal Alpine Linux distribution. We’re aiming to keep our images as small as possible, since this will reduce build times as well as the amount of space required to store the image.

To build the image, run the command

 $ docker build -t myclient:latest . 

This will download the latest version of Alpine Linux and bake it into your image. Docker will name this image myclient, since we passed the tag flag with -t. The :latest piece is a tag that tells Docker which version of your image you’re building; it could just as well have been myclient:1.0. As you develop your images, it pays to think about how you version them; there are plenty of good articles about image versioning that are beyond the scope of this one. The . in the command is required, and it’s what’s known as the build context. When building your images, Docker has access to the files and folders in the subtree of the build context you pass it. In this example, your Docker build has access to everything in ./*, which is useful when we want to extend our Dockerfile to include files and folders from your machine in the build (which we will do later).
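
To make the tagging concrete, here’s a quick sketch; the version numbers are arbitrary:

$ docker build -t myclient:1.0 .

# Tags are just pointers; the same build can carry several of them
$ docker tag myclient:1.0 myclient:latest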

Type the command docker images into your terminal. You should get an output that looks more or less like this:

REPOSITORY          TAG       IMAGE ID        CREATED          SIZE
myclient            latest    11cd0b38bc3c    4 minutes ago    4.41MB

We see that this command lists all of the images that we’ve built and have available to use. Look at how small that image is to start: Linux in 4.41MB! As you continue to use Docker, this will show both images that you’ve built locally and those you’re using from Docker Hub. To run the image inside a container, try this:

$ docker run myclient:latest

You’ll notice nothing happens :). When I was first starting out with Docker, this behavior confused me a bit: I’d build an image, try to run it, and see exactly this. We can check whether Docker is running the container with the command

$ docker ps

which will give you a list of running containers. We verify that our container isn’t running. Here’s the thing: it did run, and then it stopped. This is because we failed to tell Docker what program we wanted to run inside the container.

When working with a fresh Linux install on a VM, or when SSHing into a server, you’re typically greeted by a magically appearing sh or bash shell. With Docker, you have to explicitly tell it what to run inside your container, unless you specify an ENTRYPOINT or CMD in your Dockerfile. Let’s try this:

$ docker run -it myclient:latest /bin/sh

You should be presented with / #, your shell prompt. We’re in! This is shell access into a running container instance of the Alpine Linux myclient image we’ve built, running on the Docker daemon. The -it flag tells Docker to run in interactive terminal mode, and /bin/sh tells Docker to run the shell. Kind of cool, huh?
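
As an aside, if you do want a container to run something by default, you can bake a CMD into the Dockerfile. A minimal sketch; the echo is purely illustrative:

FROM alpine:latest

# With a default command baked in, `docker run` no longer exits silently;
# it prints this greeting instead
CMD ["echo", "hello from the container"]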

Here’s my recommendation for building Dockerfiles: build your environment inside a running container, while making note of all the commands you run in a text file somewhere. This way you get instant feedback on your build commands, as opposed to having them fail as you try to build the image. After we verify that our container is correctly configured, we can copy the commands into our Dockerfile to commit our image configuration to source.

To get some serious practice working with Dockerfiles, let’s set up an environment for our client application.

Jump back into the container if you’re not already there:

$ docker run -it myclient:latest /bin/sh

and install Node.js:

/# apk update
/# apk add nodejs
/# apk add nodejs-npm
/# node -v

If all is well, you should see that the latest version of Node (and npm) is installed and ready to use. We’ve configured the environment inside the container, but if you have experience with containers, you’re probably thinking what I’m thinking. One of the core philosophies of containers is that they are meant to be ephemeral: short-lived and disposable. In fact, many people recommend frequently stopping production containers and spinning up fresh ones for security reasons. Most containerized applications rely on port forwarding, which exposes the container to the web; if an adversary penetrates a container, regularly replacing it limits the time they have inside to do whatever it is they’re trying to do. In this vein, we have to figure out how to bake these changes right into the image.

If you haven’t done this before, try exiting your container with /# exit, then re-entering it and checking /# node -v again. Bummer. We verify that the new container instance no longer has Node installed. This is because the Docker container doesn’t persist changes we make inside it back to the actual myclient Docker image. Imagine if it did, with 100 instances of that container running and each one modifying the image as processes ran inside it.

What we need to do is extend our Dockerfile to contain the software installations we need for our project.

Edit the Dockerfile we started in myapp/client so that it looks like this:

FROM alpine:latest

RUN apk update
RUN apk add nodejs
RUN apk add nodejs-npm
RUN node -v

and build the image

$ docker build -t myclient:latest .

and now we’ve baked those configuration steps into source and updated the container image so that it has Node.js installed.

You can verify this:

$ docker run -it myclient:latest /bin/sh
/# node -v
/# exit
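
One quick aside: every RUN instruction creates a new image layer, so it’s common practice to chain related commands into a single RUN. A stylistically tighter, equivalent version of what we just wrote:

FROM alpine:latest

# One layer covers the package index update and both installs
RUN apk update && \
    apk add nodejs nodejs-npm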

Okay, we’re in good shape. We have the Node.js installation saved in our image, and the configuration persists. Again, we’re taking these steps so that we understand everything from the ground up. Sometimes, though, it’s better to stand on the shoulders of giants. The following one-line Dockerfile is pretty much equivalent to what we’ve just done:

FROM node:latest

This is the pre-built Node library image from Docker Hub. For the sake of learning, we’ll continue with our custom image for this tutorial.

Let’s get more practice working with Dockerfiles by prepping an image we can use as our Laravel development environment. Again, we could use something like Laradock (which is fantastic, by the way), or any of dozens of pre-built images available on Docker Hub or GitHub. For our purposes, that’d be cheating. It’s good to know the details for when your project gets larger and you really have to deep-dive. You’ll want to know your infrastructure like the back of your hand; otherwise, a critical failure in it will take too long to fix while customers can’t access your application and your boss is DM-ing you on Slack every 15 minutes. What I will say for Laradock and other pre-built images is that you can find actively maintained ones. To be clear, if you choose to use your own images in production, it’s up to you to patch the software inside them to ensure vulnerabilities don’t creep into your containers.

I do care about my readers, so I won’t painstakingly take you through the line-by-line process of creating a functional Laravel environment from scratch on Alpine Linux. That would be cruel & unusual punishment. Here, have a gist instead; paste it into myapp/server/Dockerfile.

Part 1 configures the environment. Docker allows you to use ENV to set environment variables that can be substituted in later instructions. Part 2 is the list of commands we’d run inside the container in order to install Nginx, since we’ll need a server to manage requests to our application. Note the use of

command \
&& command

to run commands in series. Part 3 is the installation of PHP 7 and the core extensions you’ll need to get going with Laravel. Part 4 configures the PHP environment, and Part 5 installs Composer. Lastly, there is some additional configuration. You can EXPOSE ports on the container to allow traffic in. Here we expose both 8000 and 443 so that we could serve an HTTPS version at 443 if we wanted to (and we’re smart). The COPY command copies a file from a path in the build context into the container at the path you specify as the second argument. CMD provides a default executable for the container, where the first string in the array is the path to the executable and the additional strings are arguments passed to it. There is another instruction we could have used in its place; if you’re interested, check out the top-rated Stack Overflow answer on the differences between CMD and ENTRYPOINT.
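
The gist carries the full details, but to make its shape concrete, here’s a rough sketch of how those parts might fit together on Alpine. The package names, config paths, and the CMD are assumptions for illustration, not the gist’s exact contents:

FROM alpine:latest

# Part 1: environment configuration (illustrative value)
ENV APP_ENV local

# Part 2: install Nginx to manage requests to our application
RUN apk update && \
    apk add nginx

# Part 3: PHP 7 and the core extensions Laravel needs
RUN apk add php7 php7-fpm php7-json php7-openssl php7-phar \
    php7-mbstring php7-tokenizer php7-xml php7-dom php7-ctype \
    php7-session php7-pdo_mysql

# Part 4: PHP configuration tweaks (php.ini edits and the like) go here

# Part 5: install Composer using its installer script
RUN php7 -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php7 composer-setup.php --install-dir=/usr/local/bin --filename=composer && \
    rm composer-setup.php

# Additional configuration: open the ports and wire in our Nginx config
EXPOSE 8000 443
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["sh", "-c", "php-fpm7 && nginx -g 'daemon off;'"]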

Paste the contents of this gist inside of a Dockerfile at myapp/server, and from inside that directory build the image:

$ docker build -t myserver:latest .

Oops, that was my bad. The build will fail if files passed to COPY don’t actually exist at the source path in the build context. This is something to watch out for. Create a file named nginx.conf in the myapp/server directory and paste the following contents into it.
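
The configuration came from a gist as well; a minimal, Laravel-flavored stand-in along the following lines will do. The listen port matches the one we EXPOSEd, and the php-fpm address is an assumption rather than the gist’s exact contents:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    server {
        listen 8000;

        # Laravel serves everything through the public/ folder
        root /var/www/public;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Hand PHP requests to php-fpm (assumed on its default TCP port)
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
    }
}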

Now we have easy access to our Nginx configuration. Try the build again; it should work. Depending on your CPU and network speed, you may want to go grab a cup of coffee while it builds.

Installing React

There’s a way to get up and running with React without needing Node installed on your machine. From inside the myapp folder, run

$ docker run -it --mount type=bind,source="$(pwd)"/client,target=/usr/src/app myclient:latest /bin/sh

Note that "$(pwd)” is meant to be a command substitution for the current working directory. Depending on your platform, you may have to replace this with the absolute path to your myappdirectory. When we specify the mount type as bind, Docker expects an absolute path for the source. We’re specifiying a mount point to the Docker run command so that changes we make at the folder inside the container will persist in our myapp directory, since the container will treat the folder as a volume mounted inside the container. Then, from inside the container

/# cd /usr/src
/# npx create-react-app client ; mv client/* app/ ; rm -rf client
/# cd /app ; npm install

We choose to install the software inside the container so that we don’t have to worry about having Node.js and npm installed on the local machine; it reduces the number of prerequisites needed to complete our setup. We’ll follow the same approach for the Laravel installation, so you won’t need PHP and Composer installed globally on your machine in order to install the required Composer modules and generate an app key.

Installing Laravel

From your terminal, run the myserver image with the server folder mounted inside the container:

$ docker run -it --mount type=bind,source="$(pwd)"/server,target=/var/www myserver:latest /bin/sh

And then from inside the container

/# cd /var
/# composer global update
/# composer create-project --prefer-dist laravel/laravel server ; mv server/* www/ ; rm -rf server
/# cd www ; chmod -R 775 storage
/# wget https://raw.githubusercontent.com/laravel/laravel/master/.env.example
/# cp .env.example .env
/# php artisan key:generate
/# exit

Running the Project

We could bring up the project by running the commands

$ docker run -p 80:3000 --mount type=bind,source="$(pwd)"/client,target=/usr/src/app myclient:latest npm start
$ docker run -p 8000:8000 --mount type=bind,source="$(pwd)"/server,target=/var/www myserver:latest

every time we need the environment, but that wouldn’t be any fun. We can add a docker-compose.yml file to the root of our myapp directory with the following contents to better manage our stack:
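
The original file was shared as a gist; a minimal sketch of its contents, derived from the two docker run commands above (the service names and working_dir are assumptions), looks like this:

version: '3.3'

services:
  client:
    image: myclient:latest
    command: npm start
    working_dir: /usr/src/app
    volumes:
      - ./client:/usr/src/app
    ports:
      - "80:3000"

  server:
    image: myserver:latest
    volumes:
      - ./server:/var/www
    ports:
      - "8000:8000"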

From myapp, run

$ docker-compose up

And there you have it: your client should be accessible at localhost:80, and your Laravel server at localhost:8000. Run this command whenever you want to spin up your dev environment, and run

$ docker-compose down

from the directory to bring it down.

Enjoy!
