Dockerize and orchestrate your fullstack MEAN app easily

In the era of on-demand cloud services, you may ask: why bother dockerizing your app when you can deploy your frontend on static AWS S3 hosting, your backend on Lambdas, your API on AWS API Gateway and your database on RDS? Simply because it’s worth knowing how easy it is to create a fullstack app and deploy it all on a single host using docker-compose orchestration.

For my side projects, I use a very basic hosting provider: a dedicated server running Ubuntu for a few bucks a month. I like using it to test side projects and, for convenience, I dockerize all the components and use docker-compose to orchestrate the containers. The main goal here is to deploy your app quickly for testing and experimenting in the field, rather than dealing with distributed services. Once you have extracted the value of your experiment and you see traction for your app, switching to another architecture is of course worthwhile.

In this article, we will cover:
1. how to create and dockerize a MEAN app
2. how to orchestrate all services using docker-compose

Even though I will add some code snippets for each element of the technical stack, the focus will mainly be on the Docker part. This means that you should be familiar with MySQL, Angular and NodeJS and have all dependencies already installed (node, npm, angular-cli).


Depending on your operating system, you will need to install the Docker daemon and docker-compose.

Create a MEAN app

MEAN usually stands for MongoDB, Express, Angular and NodeJS, but in this article I prefer using MySQL instead of MongoDB.

Let’s first structure our project with the following directory architecture:
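
A rough sketch of the layout, reconstructed from the description below (one sub-directory per service, plus the docker-compose.yml we will write in section 4):

```text
mean-docker/
├── frontend/            # Angular app + its Dockerfile
├── backend/             # NodeJS/Express API + its Dockerfile
├── database/            # MySQL Dockerfile + init scripts
└── docker-compose.yml
```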

As you can see, I created a main directory called mean-docker and 3 sub-directories for each of the components of the projects. In this manner, each sub-directory will have its own Dockerfile. Let’s see what they look like.

1. database Dockerfile

In the database folder, create a file called Dockerfile and simply add the content below:

# Create mysql server based on the official image from the dockerhub
FROM mysql:5
# Add a database & root password
ENV MYSQL_ROOT_PASSWORD=somePassword
ENV MYSQL_DATABASE=mean
# Optionally, run initial scripts for creating tables
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/

Once built and run as a container (which we will cover later on), this will create a mysql server with the root password you defined. You can also create a directory called sql-scripts if you need to create initial tables and prefill values at launch time. Here are some examples:

# CreateTable.sql
CREATE TABLE jobs (title varchar(25),description varchar(50));
# InsertData.sql
INSERT INTO jobs (title, description) VALUES ('dev', 'awesome job');

2.1 backend architecture

We will of course use NodeJS as a javascript runtime for our backend service. We will rely on Express for the web application framework and on the mysql driver for querying our MySQL database. Here is what backend/package.json looks like:

{
  "name": "backend",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "body-parser": "~1.18.3",
    "cors": "^2.8.5",
    "express": "~4.16.4",
    "mysql": "^2.16.0"
  }
}

Going into the backend folder and running npm install will install all the dependencies listed in package.json.

As you can see in the package.json file, if we run npm start, NodeJS will load the file called app.js, which contains our API as well as our code to interact with MySQL. Here is the content of app.js:

// app.js
const express = require('express');
const path = require('path');
const http = require('http');
const bodyParser = require('body-parser');
// api.js for the routes
const api = require('./api');
const app = express();
// body parsing middleware
app.use(bodyParser.urlencoded({ extended: false }));
// all routes are falling back into api.js
app.use('/', api);
// HTTP port setting
const port = process.env.PORT || '3000';
app.set('port', port);
// HTTP server creation
const server = http.createServer(app);
// listening all incoming requests on the set port
server.listen(port, () => console.log(`backend running on port:${port}`));

To structure our backend properly, we have separated the server configuration/loading (app.js) from the API logic (api.js). Below is a very simple example of api.js that interacts with our MySQL database to retrieve and expose data:

// api.js
const express = require('express');
const router = express.Router();
const mysql = require('mysql');
const cors = require('cors');

// connection to the mysql service (host name defined in docker-compose.yml)
const con = mysql.createConnection({
  host: "database",
  user: "root",
  port: '3306',
  password: "somePassword",
  database: "mean",
  charset: 'utf8'
});

const corsOptions = {
  origin: '*',
  optionsSuccessStatus: 200
};

// initial connection
con.connect(function(err) {
  if (err) console.log(err);
});

// our simple GET /jobs API
router.get('/jobs', cors(corsOptions), (req, res) => {
  con.query("SELECT * FROM jobs", function (err, result, fields) {
    if (err) res.send(err);
    else res.send(result);
  });
});

module.exports = router;

What is important here is that we named our database host ‘database’. This link is described later on in our docker-compose.yml file, in which we will link the database service to the backend service so that the backend can reach the database under that name.
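
As a possible refinement (not part of the article’s original code), the hard-coded connection settings could be read from environment variables, falling back to the values used above. docker-compose can then inject different values per environment without touching the code. The file name config.js and the variable names are my own choice here:

```javascript
// config.js — read the MySQL connection settings from environment variables,
// falling back to the hard-coded values used in api.js. The default host is
// the docker-compose service name, which resolves inside the compose network.
const dbConfig = {
  host: process.env.DB_HOST || 'database',
  user: process.env.DB_USER || 'root',
  port: process.env.DB_PORT || '3306',
  password: process.env.DB_PASSWORD || 'somePassword',
  database: process.env.DB_NAME || 'mean',
  charset: 'utf8'
};

module.exports = dbConfig;
```

api.js could then simply call mysql.createConnection(require('./config')).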

As you can see, for the example purpose, we have a single route GET /jobs that will return all the jobs stored in our mysql service.
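
Once the whole stack is up (covered in section 4), a quick smoke test of this route from the host could look like the following; with the seed row from InsertData.sql, the result rows are serialized straight to JSON:

```
$ curl http://localhost:3000/jobs
[{"title":"dev","description":"awesome job"}]
```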

2.2 backend Dockerfile

Now that we have built our backend, let’s simply dockerize it by first creating a Dockerfile:

# Create image based on the official Node image from the dockerhub
FROM node
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs on
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]

Later on, when we create our backend docker image, we don’t want to include the local node_modules folder in the image (it will be recreated when docker runs npm install), so we can create a simple .dockerignore file containing:

node_modules

Our final backend folder content should look like:
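
Something along these lines (node_modules exists locally after npm install, but is excluded from the image by .dockerignore):

```text
backend/
├── api.js
├── app.js
├── package.json
├── Dockerfile
├── .dockerignore
└── node_modules/
```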

Let’s now create an angular frontend and dockerize it.

3.1 frontend architecture

Go back to the root mean-docker folder and simply run the angular command to create an angular project:

mean-docker$ ng new frontend

This will create the whole Angular structure. Let’s focus simply on how to query our backend. To do so, we will create a JobRepositoryService with a method that queries the GET /jobs API and asynchronously returns an array of Job using RxJS:

import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs';
import { Job } from '../models/Job';

@Injectable({
  providedIn: 'root'
})
export class JobRepositoryService {

  constructor(private http: HttpClient) { }

  getJobs(): Observable<Job[]> {
    return this.http.get<Job[]>("http://localhost:3000/jobs");
  }
}

Then, in any of our Angular components, we can inject this JobRepositoryService and subscribe to the getJobs Observable:

getJobs(): void {
  this.jobRepositoryService.getJobs()
    .subscribe(result =>
      console.log("jobs " + JSON.stringify(result))
    );
}

3.2 frontend Dockerfile

The frontend Dockerfile looks very much like the backend Dockerfile, as they both use a node image:

# Create image based on the official Node image from dockerhub
FROM node
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs on
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]

The only difference here is that we expose the frontend on port 4200, the default ng serve port. Also do not forget to create the .dockerignore file with the following content, for the same reasons as before:

node_modules

4. Docker compose

Now that we have all our services expressed with their Dockerfiles, we can describe the orchestration by declaring a simple docker-compose.yml file located in the root mean-docker folder:

version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  frontend: # name of the first service
    build: frontend # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
    container_name: front-container
    restart: always
  backend: # name of the second service
    build: backend # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    container_name: back-container
    restart: always
    links:
      - database # link this service to the database service
  database: # name of the third service
    build: database # specify the directory of the Dockerfile
    container_name: database-container
    restart: always

We have 3 distinct services in our MEAN stack, each of them having its own Dockerfile. So in the build node, we point to the folder where the Dockerfile is located. We also define the port forwarding. And most importantly, we link the backend to the database container by specifying the links node.

Now, to create the images and launch the containers, we just have to run the following command:

$ docker-compose up --build -d

And to check the status of the containers, we have the following command:

$ docker container ps

Your app is now running and you can access the frontend at localhost:4200. If you encounter an error (such as an “Invalid Host header”), you can try to enhance the start script in frontend/package.json by adding the following:

"start": "ng serve --host=0.0.0.0 --disable-host-check",


As you can see, creating a dockerized MEAN stack is very easy. You simply need to structure your project so that each of your services has its own Dockerfile. Then, by using docker-compose, you can run your multi-container docker app and create links between services.

I personally use this orchestration approach for quickly creating MVPs, in order to focus on learning from the MVP experiment rather than on scalability at this early stage.