VENoM Stack: Docker setup for local development

James Audretsch
5 min read · Feb 9, 2018


Vue, Node (Express), MongoDB for local development with Docker.

I suggest cloning the code for this tutorial and following along, available at: https://github.com/jamesaud/VENoM-Docker

This tutorial covers how to do local development with Docker for npm-based apps, not how to write Vue.js or Node apps.

The app in this setup is from another pretty great tutorial called Build full stack web apps with MEVN Stack — if you’d like to learn how to build a Vue.js + Node app from scratch, follow this first.

Why VENoM instead of MEVN? Because it sounds much cooler:
Vue Express Node MongoDB :)

The App

First clone the repository:

git clone https://github.com/jamesaud/VENoM-Docker
cd VENoM-Docker

There are three components to this web app:
Frontend (Vue), Backend (Node with Express), and Database (MongoDB).

Let’s focus on Dockerizing (yep, that’s a word) each one separately, and then coordinating them in a docker-compose file.

1. Dockerize the Front-end

The client folder holds our Vue.js client side code.

We are going to mount our client directory into a Docker container. This way we can edit our files from our host computer, but run them in a Docker environment.

We can create a Dockerfile to run this app:

Path: client/Dockerfile

FROM node:carbon
Start from a pre-configured Docker image that already has Node installed.

EXPOSE 8080
Our Vue app runs on port 8080, so we expose that port.

WORKDIR /data/
Sets the working directory to /data/ in the container, roughly equivalent to running cd /data/.

CMD ["npm", "start"]
Run npm start after the user starts the container.

COPY ./docker/entrypoint.sh /entrypoint/entrypoint.sh
Copy entrypoint.sh from the host into the container. Note the absolute destination path: anything copied under /data/ would be hidden once we mount the client directory over it.

RUN ["chmod", "+x", "/entrypoint/entrypoint.sh"]
Run a command to make the shell script executable.

ENTRYPOINT ["/entrypoint/entrypoint.sh"]
We need to run npm install to install the binaries for our app inside the Docker container. Why can’t we run this command on our local machine? The binaries are built for a specific operating system, and our machine’s OS is likely different from that of node:carbon, which (if you check its Dockerfile) is based on Debian Jessie.
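Putting those instructions together, client/Dockerfile reads roughly as follows (the instruction order here is a sketch; behavior is the same regardless of where CMD appears):

```dockerfile
# Pre-configured image with Node 8 (carbon) installed
FROM node:carbon

# Our Vue app runs on port 8080
EXPOSE 8080

# Working directory inside the container; the client folder gets mounted here
WORKDIR /data/

# Copy the entrypoint to an absolute path OUTSIDE /data/,
# so the mount over /data/ doesn't hide it
COPY ./docker/entrypoint.sh /entrypoint/entrypoint.sh
RUN ["chmod", "+x", "/entrypoint/entrypoint.sh"]

# The entrypoint runs first, then hands off to CMD
ENTRYPOINT ["/entrypoint/entrypoint.sh"]
CMD ["npm", "start"]
```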

Running npm install directly in the Dockerfile would be nice. Unfortunately, when we mount our host machine’s client directory into the container’s /data/ folder, the mount hides whatever was already at that path, which is exactly where node_modules would have been installed.

Instead, we can use an entrypoint.sh file to run npm install after the user starts the container and mounts the client directory:

client/docker/entrypoint.sh
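The embedded gist may not render here; based on the description below, the script looks roughly like this (the exact wording of the message is an assumption, so check the repo for the real file):

```shell
#!/bin/bash
if [ ! -d "node_modules" ]; then
    # node_modules missing: install dependencies inside the container
    npm install
else
    echo "node_modules found. If it was installed from your host machine, delete it and restart."
fi

# Execute the CMD arguments from the Dockerfile (npm start)
exec "$@"
```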

Line 2: Check if node_modules exists. If it doesn’t, run npm install. If it does, ask the user to delete node_modules if they installed it from their host machine. This way we won’t run npm install every time we start a container.

The only downside to this approach is that the user can’t run the app from their host machine and with Docker at the same time.

Line 10: Execute the CMD arguments from the Dockerfile.

Note: there are other techniques for managing npm dependencies in a Docker container, such as using a Docker volume to mount the node_modules folder. However, using a volume caused hot-reload to stop working and is more confusing to maintain, so this is my best fix at the moment. If you could specify a custom location for node_modules, it would solve this problem; if anyone knows how to do that, please comment and let me know!

Now let’s hook up the front-end service in a docker-compose file:

docker-compose.yml
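The gist embed is missing here; the client service looks roughly like this (the exact API_URL value is an assumption, so check client/src/services/Api.js in the repo for the URL it expects):

```yaml
version: "3"
services:
  client:
    build: ./client          # Path to the folder containing the Dockerfile
    ports:
      - "8080:8080"          # host:container
    volumes:
      - ./client:/data       # Mount the host's client folder into /data/
    environment:
      - API_URL=http://localhost:8081   # Consumed by client/src/services/Api.js
```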

client
the name of our service

build
path to the Dockerfile

ports
map the container’s port 8080 to port 8080 on our host machine, because Vue is running on this port in the container

volumes
mount the host’s client folder into the /data/ folder in the container

environment
pass an environment variable, API_URL. This is used in client/src/services/Api.js to interact with the server (not set up yet).

Okay, we are all set to start our service. Run:

docker-compose build    # Build the image, will take a minute
docker-compose up       # Start the container for our service

Visit http://localhost:8080 to see it running! Saving posts won’t work yet because our server and database aren’t running.

2. Dockerize the Database

The beauty of Docker is that you can create an image that encapsulates a service and expose its endpoints through port mappings, while custom configuration is passed in through environment variables.

Let’s utilize the mongo image to have a database that’s already configured:

docker-compose.yml
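The gist is missing here too; nested under the existing services key, the database service looks roughly like this (the service name matches the server-database hostname the server will use):

```yaml
  server-database:
    image: mongo             # Official pre-configured MongoDB image
    ports:
      - "27017:27017"        # Mongo's default port, exposed to the host
    volumes:
      - ./db:/data/db        # Persist data on the host so it survives container deletion
```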

image
build from the mongo image (https://hub.docker.com/_/mongo/).

volumes
map the /data/db/ folder of the container to the host’s db folder. This way, when your container is deleted, the data won’t be.

ports
mongo runs on port 27017, so it is exposed to the host’s 27017 port

If you want to, verify that the database starts by running:

docker-compose up

3. Dockerize the Backend

The server functions as an API endpoint for the front-end. It’s an npm-based app, so we can use the exact same approach for our Dockerfile. In fact, just look through server/Dockerfile and server/docker/entrypoint.sh; it’s the same code as the client.

If you look at our server code, server/src/app.js, we also need to provide a DATABASE_URL environment variable.

Let’s add the service to docker-compose.yml:

docker-compose.yml
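Again, the embed is missing; nested under services, the server service looks roughly like this (the database name in DATABASE_URL is an assumption, so check server/src/app.js for the exact connection string it expects):

```yaml
  server:
    build: ./server          # Same Dockerfile/entrypoint pattern as the client
    ports:
      - "8081:8081"          # host:container
    volumes:
      - ./server:/data       # Mount the host's server folder into /data/
    depends_on:
      - server-database      # Start the database container first
    environment:
      - DATABASE_URL=mongodb://server-database:27017/posts
```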

ports
we’re mapping port 8081 from the host to port 8081 on the container. Now, the environment variable API_URL in the client service will work.

volumes
The server folder is mounted into the /data/ folder in the container.

depends_on
the server container waits for the server-database container to start, so we don’t try to connect to a database that doesn’t exist yet. (Note that depends_on only waits for the container to start, not for MongoDB inside it to be ready for connections.)

environment
DATABASE_URL will be used to connect to our database. The hostname we pass is server-database, the database’s service name.

How does the server container know what the URL for server-database is? The Docker engine is pretty clever: when using a docker-compose file, it automatically joins our containers to a shared internal network, where each container is reachable by its service name. The service name is just a DNS alias for the container’s IP address, mapped for us automatically by Docker.

Run the app

We should be all set to go. Once again, to start the app run:

docker-compose build
docker-compose up

All three services should start; you’ll see the output in the terminal.

Visit http://localhost:8080 to see the front end app.

The server is running on port 8081, so you can test one of its endpoints, like http://localhost:8081/posts, to verify.
