There’s plenty of information on the web about scaling a MEAN app using Docker containers, but I couldn’t find much on replicating MEAN apps using the same Docker image.
Using environment variables (with the magic of Docker Compose to simplify things) you can run multiple independent MEAN apps from the same Docker image.
Say I have created a super cool MEAN app called RED SITE. It’s so cool that I want another one: BLUE SITE! Well, when I copy the Node code and launch it (on a different port), I get a conflict, as the BLUE SITE code is pointing at the same database as RED SITE. Uh oh!
I go ahead and change the code of BLUE SITE to point at a different database and all is well, only I now have two slightly different versions of my code that I have to maintain.
This may seem fine in this simple example, but there could be multiple code changes or multiple site replications, so I was looking for a better solution using Docker.
First of all, I tried just creating a Docker image containing my Node code, pointing to a local MongoDB (that I would Dockerise later). I started my RED SITE and BLUE SITE containers from this image, but hit the same database conflict, as the Node code is identical.
All in one!
To solve this, I placed my MongoDB server inside the Docker image alongside the Node code. Now when spinning up my RED SITE and BLUE SITE containers, the databases they connected to were isolated. The code remained the same, referencing the same database, but the MongoDB servers were independent.
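For reference, the all-in-one image looked roughly like this. The base image, package name and start script are my illustrative assumptions, not an exact copy of my build:

```dockerfile
# Anti-pattern sketch: MongoDB and the Node app in one image
FROM node:14
RUN apt-get update && apt-get install -y mongodb
WORKDIR /app
COPY . .
RUN npm install
# start.sh launches mongod in the background, then starts the Node app
CMD ["./start.sh"]
```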
This creates a new problem: since my data is stored within the container, it is lost every time the container is killed. This means if I ever want to update my Docker image, I would lose my data. To prevent this I mounted a local storage volume to /data/db/ within the container.
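Mounting the data directory looks something like this; the container names, host paths and image name are illustrative, not from my actual setup:

```shell
# Persist each site's data on the host, so killing or rebuilding
# the container no longer destroys the database files
docker run -d --name red-site  -p 3000:3000 -v "$PWD/red-data:/data/db"  mean-app-image
docker run -d --name blue-site -p 3001:3000 -v "$PWD/blue-data:/data/db" mean-app-image
```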
There are other issues with this method though. Firstly, it goes against the Docker best practice of running a single application per container. It also wastes a lot of resources, as each MongoDB server instance carries significant overhead.
The best way I found to solve this was to use environment variables. I moved my MongoDB server into a separate container, then edited the Node code in my Docker image to choose the database to connect to based on an environment variable. Now when starting a container from my image, I can pass an environment variable that tells the Node code which database to connect to.
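In the Node code this boils down to a couple of lines. The variable names `SITE_DB` and `MONGO_HOST` here are my own illustrative choices, not fixed names:

```javascript
// Build the MongoDB connection string from environment variables,
// falling back to sensible defaults for local development.
const dbName = process.env.SITE_DB || 'redsite';
const dbHost = process.env.MONGO_HOST || 'localhost';
const mongoUrl = `mongodb://${dbHost}:27017/${dbName}`;

console.log(mongoUrl);
// Something like mongoose.connect(mongoUrl) then targets the per-site database
```

The same image can now serve any number of sites, with the database selected entirely from outside the container.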
This is great, but it requires long Docker commands with a lot of arguments every time I start a container. I also had to make sure my MongoDB container was visible to my Node code containers.
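Done by hand, that looks something like the commands below; the network, container and variable names are my assumptions:

```shell
# One shared network so the app containers can reach MongoDB by name
docker network create mean-net
docker run -d --name mongo --network mean-net mongo

# Same image for every site; the database is selected via env vars
docker run -d --name red-site  --network mean-net \
  -e MONGO_HOST=mongo -e SITE_DB=redsite  -p 3000:3000 mean-app-image
docker run -d --name blue-site --network mean-net \
  -e MONGO_HOST=mongo -e SITE_DB=bluesite -p 3001:3000 mean-app-image
```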
A nice way of simplifying this is to use Docker Compose, which creates and configures all the Docker networks, environment variables, volumes and ports when run, so you can spin up multiple instances of the application alongside the database very easily.
The great thing about this is that every service defined in the docker-compose config file is reachable from every other container in the cluster by its service name. This way my Node code can reference the MongoDB without having to care where it is, or expose the MongoDB port outside of the Docker network.
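A minimal sketch of such a docker-compose file, with illustrative image and variable names:

```yaml
version: "3"
services:
  mongo:
    image: mongo
    volumes:
      - ./data/db:/data/db    # keep the data outside the container
  red-site:
    image: mean-app-image     # the single shared image
    environment:
      - MONGO_HOST=mongo      # the service name resolves inside the network
      - SITE_DB=redsite
    ports:
      - "3000:3000"
  blue-site:
    image: mean-app-image
    environment:
      - MONGO_HOST=mongo
      - SITE_DB=bluesite
    ports:
      - "3001:3000"
```

A single `docker-compose up` then brings up the whole cluster, and adding another site is just a few more lines in the same file.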
I can now create a GREEN SITE and PURPLE SITE within seconds!