Docker is my {I.D.E}

from a developer’s perspective

will.streeter · Jun 12, 2017

In this post, I will describe how the Docker ecosystem can be used to provide an Integrated Development Environment (IDE). For readers whose position in a software shop is that of Development and Operations (DevOps), please accept my apologies for the brief definition of Docker and perhaps my tardiness with respect to the latest features. I am writing this from the perspective of a FullStack developer, whose focus is primarily getting front-end applications, back-end applications, and various resources connected to facilitate code development. I will begin by using the Dockerfile in the ws-ngx-login-demo GitHub repository as an example for analyzing the configuration and reviewing the commands used to build and run it as a container. By the end of the article, I will demonstrate how Docker Compose is used in the ws-dev-docker-example GitHub repository to bundle multiple containers.

As delineated in Practical Web Development and Architecture, the introductory post in this series, one of the most consequential aspects of onboarding developers is the amount of time it takes a new developer to get an IDE in place for interacting with code. This initial introduction to a development team has improved greatly thanks to Docker, the current manifestation of an initiative started some twenty years ago and promoted by companies like Google around the concept of operating-system-level virtualization. Offering an implementation of this concept, Docker provides a machine (docker-machine) to install an engine (docker-engine) on the host operating system, which enables the host to share its kernel with isolated containers (docker containers). Docker containers are the destination of an application like NodeJS along with the various libraries needed to support installation as well as run-time resources.

The orchestration and scaffolding of Docker enables a local client ecosystem to be deployed to a production system with few modifications to the configuration of the local Docker containers. Rather than drilling down into specifics about what makes this possible, I am going to focus on implementing an ecosystem for development purposes on a client machine (Mac).

Itinerary

Docker Set Up: summarizes Docker installation, briefly explains a few configuration commands used in a Dockerfile, and uses Docker commands to build an image and run the image as a container.

Helpful tips for debugging Docker: briefly describes several ways of using Docker commands to debug the Docker ecosystem.

Multiple Container Orchestration: demonstrates how a docker-compose file is configured to aggregate multiple containers running simultaneously.

As mentioned at the beginning of this post, this article was written to support an example of developing a FullStack application using NodeJS and Angular X (4.1.0 at the time of this writing). Each application has its own Docker container. The full environment also employs an NGINX container as well as a container for Mongo. For the purpose of this article, I will mostly focus on how Docker is configured and utilized for building the front-end Angular client.

Docker Set Up

(The following instructions are based on developing with Docker on a Mac.)

To set up Docker on your client machine, visit the Docker site and go to the community edition page, where you can download the appropriate version for your machine’s operating system. You will then have the choice of the ‘stable’ or ‘edge’ channel; I suggest using the stable version. Information provided on the page will show you how to verify the installation.
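Once the installation finishes, a quick sanity check from the terminal confirms that the engine (and Docker Compose, which is bundled with Docker for Mac) is available; the exact version numbers you see will depend on the release you installed.

$ docker --version
$ docker-compose --version
$ docker run hello-world

The hello-world image is a tiny official test image; if it prints its greeting, the engine can pull images and run containers.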

From Docker Images to a Docker Container:

Docker containers are built from Docker images, which is the case with ws-ngx-login-demo, the repository that will be used to demonstrate this Docker-based development. The images are publicly accessible on the Docker Hub registry site and can be reviewed in my GitHub repositories as well.

Those familiar with developing front-end applications will recognize that NodeJS is needed in a development environment for task running and for serving a view of the application in a browser. There is often a need for libraries and SDKs to be installed in the underlying operating system as well, and our Docker ecosystem should be able to accommodate these needs. While those running the Docker container for ws-ngx-login-demo do not have to worry about how and what other images are accumulated to create the final image, offering insight into the process will serve as a good start to understanding how the Docker system functions.

As NodeJS is a requirement, I begin with an image from Michael Hart, which can be found in the mhart/alpine-node GitHub repo. Hart seems to keep relatively up to date with the latest NodeJS builds based on the Alpine Linux distribution.

  • ws-node-alpine: uses the mhart/alpine-node image as a base in its Dockerfile and customizes it by adding libraries not included with mhart/alpine-node. This image will be used in various other containers utilizing NodeJS.
  • ws-typescript-development: uses the base image produced by ws-node-alpine and adds several globally installed libraries, such as the TypeScript compiler (tsc), Gulp, Yarn, Typings, and TSLint; it forms the base image of ws-ngx-login-demo. Creating a new image this way leaves ws-node-alpine available to other containers that do not need these global libraries.
  • ws-ngx-login-demo: contains our Dockerfile of concern, using the base image produced by ws-typescript-development, which is the accumulation of the previous images. Here is where the configuration of the front-end client is implemented.

Some may think ws-typescript-development is an unnecessary step that adds bloat, since TypeScript, Gulp, and the other libraries can be globally installed on the client machine and then orchestrated with Docker to mount the results of task running. However, I have found it saves cycles by removing the dependency on developers to accurately set up client machines, ensuring global libraries are accessible with proper permissions. Even if the resulting image is large during development, it will not be the same image used in production.

ws-ngx-login-demo + Dockerfile.dev

While it is typical to find just one Dockerfile in a repository, I have become accustomed to creating a Dockerfile.dev for development and a Dockerfile for production. A Dockerfile for production does not currently exist, as the intention is to demonstrate a development practice with Docker.

Dockerfile.dev breakdown:

FROM willsonic/ws-ngx-development:v0.0.1

This informs Docker which base image will be used to build the current image.

ENV UI_CLIENT_PORT 5555
ENV LIVERELOAD_UI_CLIENT_PORT 3001

EXPOSE $UI_CLIENT_PORT
EXPOSE $LIVERELOAD_UI_CLIENT_PORT

Here we set up some environment variables and request that the Docker container make certain ports available to the outside world.

  • UI_CLIENT_PORT: provides the port used to view our application in the client machine’s browser: http://localhost:5555
  • LIVERELOAD_UI_CLIENT_PORT: provides the BrowserSync port, http://localhost:3001 . Part of the BrowserSync implementation incorporates the ability to LiveReload in the browser. (As a side note, BrowserSync is used to serve the HTML, CSS, image assets, and JavaScript files, performing the duties that will be the responsibility of NGINX in a production environment.)

It is important to note how build-time and run-time values differ. ENV values are substituted into Dockerfile statements (such as the EXPOSE instructions above) while the image is being built, so changing them later will not alter what was baked into the image; the variables themselves do persist as environment variables inside the running container. Values that are needed only during the build are better expressed with ARG.
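As a hypothetical illustration (these lines are not part of this project’s Dockerfile), the distinction looks like this:

# ARG exists only while the image is being built
ARG BUILD_LABEL=dev
# ENV is resolved at build time and also persists inside the running container
ENV UI_CLIENT_PORT 5555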

RUN mkdir -p /app/dist \
&& mkdir -p /app/tools \
&& mkdir -p /app/resources \
&& mkdir -p /app/node_modules \
&& mkdir -p /app/src

RUN is used to execute shell commands. Directories are created with the mkdir command and will be preserved in the Docker image after the build process completes.

COPY test-config.js /app/test-config.js
COPY test-main.js /app/test-main.js
COPY karma.conf.js /app/karma.conf.js
COPY protractor.conf.js /app/protractor.conf.js
COPY tslint.json /app/tslint.json
COPY package.json /app/package.json
COPY tsconfig.json /app/tsconfig.json
COPY tools /app/tools
COPY gulpfile.ts /app/gulpfile.ts
COPY src /app/src

The COPY command copies files and directories from the client machine into the Docker image we are building. Without the directories created by the previous RUN command, the files and directories needed by the build process would not be present when running the image. These same paths also serve as mount points later, so that changes made on the client machine can propagate into the Docker container.

WORKDIR /app

This command establishes the working directory that subsequent instructions, and the running container, will use as their base directory.

RUN npm install && gulp build.bundle.rxjs

This command applies the ubiquitous ‘npm install’, the first step necessary to install the packages needed to develop and run client-side JavaScript frameworks. One of the critical tasks that also needs to be completed while setting up the development environment is the placement of the ReactiveX RxJS library. This is specific to our client application and the need for SystemJS to become aware of the location of the RxJS package. (The double ampersand (&&) is used to chain RUN commands.)

CMD ["npm",  "start"]
#ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]

The CMD command executes the package.json script:

"start": "gulp serve.dev --color",

The ENTRYPOINT command is currently commented out with the prefix “#”. The usage of this command will be explained later, when debugging methods are described.

A more detailed explanation of the build process, as well as other commands not used here, can be found on the Docker commands reference page.

Building and Running:

Before we can run the image as a container, we must create the image. When we installed the Docker client, a command line interface (CLI) was also installed. Execute the following command from the root of ws-ngx-login-demo-dev to build an image.

$ docker build -t local-ws-ngx-login-demo-dev -f Dockerfile.dev .
  • -t : tags the image we are building as ‘local-ws-ngx-login-demo-dev’. (Since we could eventually have many images on our machine, possibly ws-ngx-login-demo-prod for production or staging etc., I like to create a tag (-t) name that accurately describes the image’s purpose.)
  • -f : points to the Dockerfile used to build the image. (Without the file (-f) option, Docker assumes the default file name, Dockerfile. Since the production version of the file would be different, I have created a Dockerfile.dev.)
  • . : the dot specifies the build context, the directory whose contents are sent to the Docker engine and made available to the COPY commands; here it is the current directory.

After running the build command, we should eventually see a message indicating whether the build succeeded or failed.

...
---> 6e0d6dee5cbf
Removing intermediate container 8b8ae8cff7e1
Step 19/19 : CMD npm start
---> Running in 52442749c401
---> bf87702e0178
Removing intermediate container 52442749c401
Successfully built bf87702e0178
$

(Note: this may take several minutes depending on the speed of your connection, as we must download the base images used by ws-ngx-login-demo as well as the current libraries listed in package.json.)

Executing the command ‘docker images’, we can see the resulting images we have created:

output from docker images

Now that the images necessary for running ws-ngx-login-demo have been constructed, we are prepared to run the image as a container. Before doing so, I will issue the ‘docker ps -a’ command to see what containers currently exist.

Before running any containers.

The output demonstrates there are currently no containers running. This will change when we execute the following command:

$ docker run --name ws-ngx-login-demo-container  -d -p 5555:5555  local-ws-ngx-login-demo-dev

using the following options:

  • --name : a label providing an identifier for the container
  • -d : runs the container in detached mode, in the background, so the terminal is not attached to the container’s process.
  • -p : maps a host port to a container port (HOST:CONTAINER); this is also the port NodeJS uses to expose the web server.
  • local-ws-ngx-login-demo-dev : the name of the image we are running as a container.

More information regarding the execution of the ‘docker run’ command can be found on the Docker website. When we execute the command ‘docker ps -a’, we should see the following container running. While the CONTAINER_ID ( e2f045a6ae86 ) may be different, the NAMES ( ws-ngx-login-demo-container ) should be the same.

After executing command to run the image as a container.

Now if you open the browser to http://localhost:5555, the web application should appear.

ws-ngx-login-demo browser view
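If the page does not load, one quick check (a habit of mine, not part of the repository’s instructions) is to ask Docker which port mappings it actually published for the container; the output should show the container’s 5555/tcp port bound to a host port.

$ docker port ws-ngx-login-demo-container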

While the container is up and running, LiveReload is not accessible; we are unable to get the browser to reload when we make changes to our code. There are different ways to accomplish this task. We will use a docker-compose.yml.
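As a rough sketch of what such a file can look like for this single container (adapted from the service definition shown later in this article; the repository’s actual docker-compose.yml may differ in its details):

version: '3'
services:
  ws-ngx-login-demo:
    container_name: ws-ngx-login-demo-container
    build:
      context: .
      dockerfile: ./Dockerfile.dev
    image: local-ws-ngx-login-demo-dev:latest
    command: ['npm', 'run', 'start']
    volumes:
      - ./src:/app/src
      - ./tools:/app/tools
    ports:
      - '5555:5555'
      - '3001:3001'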

First stop the current container and then remove the container.

$ docker stop ws-ngx-login-demo-container && docker rm ws-ngx-login-demo-container

When that is complete, run the container by executing the following command.

$ docker-compose up

This will enable us to run the application using the package.json ‘npm run start’ command, which executes our gulp task. Now we get the following:

  • application logging output
  • LiveReload facilitates the restarting of the application after making and saving changes to our code.
  • access to the BrowserSync portal

We can stop the container by simply pressing ‘Ctrl + C’ in the terminal; however, the container will still be present. To both stop and remove the container, we can either issue the previous ‘stop and rm (remove)’ commands or just execute:

$ docker-compose down

This command can be executed in the same terminal after ‘Ctrl + C’, or by opening another terminal and running ‘docker-compose down’. (I usually have two terminals open: one for bringing up a bundle of containers and another for taking them down.)

Before expounding on how docker-compose is used to orchestrate the entire suite of applications for developing a FullStack, I will review some of the processes I use for debugging when my Docker efforts go astray.

Helpful tips for debugging Docker

Docker Image Building: I find it is necessary to remove images before rebuilding them.

Above is a view of my terminal after executing ‘docker images’. The ‘<none>’ under REPOSITORY is the output of an image that is broken or did not complete the build process. We should remove the image by using the IMAGE ID ‘2d2fd0396d56’ and executing the following command.

$ docker rmi 2d2fd0396d56

Sometimes the ‘rmi’ command will not respect our wishes and we need to add the force option (-f): ‘docker rmi -f 2d2fd0396d56’. At other times we may be left with images that are impervious to our requests, which are said to be ‘dangling’. To remove these images we can execute:

$ docker rmi $( docker images -q -f dangling=true)
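If your installation is on Docker 1.13 or later, the prune subcommands accomplish similar cleanup: ‘docker image prune’ removes dangling images (add -a to remove all unused images), and ‘docker system prune’ also clears stopped containers and unused networks.

$ docker image prune
$ docker system prune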

Docker Container running: I have often found that if things go awry, it happens during this step. The first and most obvious choice is to simply review the log output displayed when we execute ‘docker-compose up’. If log output did not appear when executing the compose command, or you want the logs from a specific container, try the ‘docker logs’ command:

$ docker logs [container id or name] 
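Two flags I find useful here: -f follows (streams) the log output, as the Makefile target later in this article does, and --tail limits how much history is printed.

$ docker logs -f --tail 100 ws-ngx-login-demo-container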

If the logs are not accessible or offer no help, I have found the easiest way to figure out what is happening is to rebuild the image after replacing the Dockerfile’s final CMD or RUN command with an ENTRYPOINT command. This enables standing up the container in a stasis mode. When we execute the ‘docker run’ command and the container fails to start or exits immediately, we need to run the container without executing the final CMD or RUN command, which is more often than not the culprit of the failure. As I mentioned earlier, the Dockerfile.dev of this project has a commented-out line:

ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]

This command keeps the container running in place of the CMD, allowing us to access it and start investigating what may be breaking or not working correctly in our setup. To use this command, take the following steps:

1- Stop and remove the offending container.

docker stop [container id or name] && docker rm [container id or name]

2- When the offending container is no longer present, remove the image that was used to create the container with the ‘docker rmi’ command. Once the image has been successfully purged, update the Dockerfile, commenting out the CMD and uncommenting or inserting the ENTRYPOINT command as shown below.

# Dockerfile.dev
...
#CMD ["npm", "start"]
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]

3- After updating the Dockerfile, execute the build command to create the image. Once the image has been created, use the run command we initially executed to start the container, not the ‘docker-compose up’ command.

$ docker run --name ws-ngx-login-demo-container  -d -p 5555:5555  local-ws-ngx-login-demo-dev

4- To make sure the container has started, execute the ‘docker ps -a’ command and view the STATUS of the container. It should not say ‘Exited ….’ but rather ‘Up ….’. Now that the container is up, we can issue the following command to open a shell inside the container, where we can start a better analysis of missing files, directory errors, etc.

$ docker exec -it ws-ngx-login-demo-container sh
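Once inside the container, a few first checks I tend to run (hypothetical examples, adapt them to whatever is failing) are verifying that the files COPY was supposed to place in /app are actually there, and re-running the start command by hand to see the error directly:

ls -la /app
cat /app/package.json
npm start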

Short Cuts:

I have found it is often a good idea to create a Makefile of the Docker execution commands. Not only does this help cut down on incorrect inputs and constant searching of the shell history for previously used commands, but when working with a team who may not be as familiar with the Docker paradigm, it can be a tremendous help.

# Makefile for ws-ngx-login-demo
all:

CONTAINER_NAME = ws-ngx-login-demo-container
IMAGE_NAME = local-ws-ngx-login-demo-dev

build-dev:
	docker build -t $(IMAGE_NAME) -f Dockerfile.dev .

run-container:
	docker run --name $(CONTAINER_NAME) -d -p 5555:5555 $(IMAGE_NAME)

start:
	docker start $(CONTAINER_NAME)

stop:
	docker stop $(CONTAINER_NAME)

rm:
	docker rm $(CONTAINER_NAME)

up:
	docker-compose up

down:
	docker-compose down

logs:
	docker logs -f $(CONTAINER_NAME)
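With the Makefile in place, a typical development session can reduce to a handful of targets:

$ make build-dev
$ make up
$ make logs
$ make down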

Multiple Container Orchestration:

This article is part of a series written to demonstrate an approach to FullStack development, Practical Web Development and Architecture, which can facilitate optimal output with respect to both a scalable architecture as well as the coordination of efforts among the members of a development team.

Thus far, I have covered building a single image and running it as a single container, using ws-ngx-login-demo as an example. I also demonstrated how to use ‘docker-compose’ to run the container while performing code development, so that the results of your efforts propagate instantly. In this section we will expand on how ‘docker-compose’ can be used to orchestrate development of several containers representing a FullStack. To this end, the bundling of the various containers can be found in ws-dev-docker-example.

The bundle includes 4 GitHub repositories, which have been added as submodules.

  • ws-ngx-login-demo : an implementation of a front-end client built with TypeScript. A detailed description of the architecture can be reviewed in the article, Optimal Angular : PubSub With NGRx.
  • ws-node-demo : an implementation of a back-end web server using NodeJS, written with TypeScript. A detailed description of the architecture can be reviewed in the article, Swagger, NodeJS, & TypeScript : TSOA.
  • ws-mongo-demo : a Dockerfile used to create a containerized instance of a Mongo database, used by ws-node-demo to store and retrieve data.
  • ws-nginx-demo : contains an implementation of an NGINX web server, providing a representation of how a FullStack application would be accessed in production. The name of each ‘service’ in the docker-compose file is proxied as a host in the nginx.conf file.
root directory for ws-dev-docker-example and partial docker-compose.yml

The docker-compose.yml defines how multiple containers can be orchestrated to perform in unison with each other. For the purposes of this series and the example application provided, I am only using the Docker Compose process for building an integrated development environment. However, the entire ecosystem of the docker-compose.yml file and the associated Dockerfiles in each repo can be repurposed and packaged as a production-deployable bundle.

docker-compose.yml breakdown:

As a YAML file, a docker-compose file must adhere to YAML syntax rules, such as newlines and consistent indentation. The ws-dev-docker-example/docker-compose.yml declares Docker Compose file format version ‘3’, which associates the processing of the file with a specific version of the Docker Engine.

version: '3'
services:
  mongo:
    ...
  ws-node-demo:
    ...
  ws-ngx-login-demo:
    ...
  ws-nginx-demo:
    ...

version: directs the docker-compose file to be processed in association with a specific version of the Docker Engine.

services : a top-level YAML mapping used to define each of the containers that will become a service. services is one of three possible top-level Docker Compose sections (services, networks, and volumes).

mongo, ws-node-demo, ws-ngx-login-demo, ws-nginx-demo: represent a second-level set of YAML keys. Each respective key is used to express the set of definitions that will be applied to the associated container.

# contents of the docker-compose service node for ws-ngx-login-demo
ws-ngx-login-demo:
  restart: always
  container_name: ws-ngx-login-demo-container
  build:
    context: ./ws-ngx-login-demo
    dockerfile: ./Dockerfile.dev
  image: local-ws-ngx-login-demo-dev:latest
  command: ['npm', 'run', 'start']
  volumes:
    - ./ws-ngx-login-demo/dist:/app/dist
    - ./ws-ngx-login-demo/tools:/app/tools
    - ./ws-ngx-login-demo/src:/app/src
    - ./ws-ngx-login-demo/yarn.lock:/app/yarn.lock:rw
    - ./ws-ngx-login-demo/tsconfig.json:/app/tsconfig.json
    - ./ws-ngx-login-demo/package.json:/app/package.json:rw
    - ./ws-ngx-login-demo/gulpfile.ts:/app/gulpfile.ts:rw
  links:
    - "ws-node-demo"
  ports:
    - '5555:5555'
    - '3001:3001'

restart : specifies an action for restarting the container when it stops running due to a failure. One such scenario would be a NodeJS application failing to start because its reference to the database is undefined, since the database container has not finished instantiating.
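For reference, these are the restart policies Docker Compose accepts; always is used above, and the others are listed here only for comparison.

restart: "no"            # default, never restart automatically
restart: always          # restart whenever the container stops
restart: on-failure      # restart only on a non-zero exit code
restart: unless-stopped  # restart unless the container was explicitly stopped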

container_name: allows for the creation of a custom name other than the default.

build: provides custom configuration for creating a Docker image before running it. While we may use the commands previously described in Building and Running to create images during initial development, it is easier to drive the entire process with just the ‘docker-compose up’ command once images and containers seem to be standing up accurately.

image: denotes the name and tag of the Docker image produced by the build process, or of an already-created, accessible image that will be used to create the container.

command: provides the same behavior as the CMD option in Dockerfiles. However, when it is part of the docker-compose configuration it will override the CMD option in a Dockerfile.

volumes: allows for a list of host path : container path pairs of files or directories to be mounted into the container. While the VOLUME option in a Dockerfile performs a similar operation, the retention of files and directories is different. As described previously in the Dockerfile configuration, the RUN option was used to make directories that hold the mounted data even after the image is constructed; had we not done this step, our data would not be present while the container is running. Using the volumes option with docker-compose, mounted data can actually be placed in the directories created during the build process without the COPY command in the Dockerfile.

links: enables a list of services that the current container depends on, thus offering the capability of determining the order in which services are created. Just as the service names are used in the nginx.conf as proxied hostnames to identify a container, so too does the links option offer service-name resolution.

ports: defines port mappings in ‘HOST:CONTAINER’ format. If the mapping does not include both ports, the value is treated as the container port and Docker assigns an ephemeral host port.

While not listed in the ws-ngx-login-demo service configuration above, another useful option is environment, which can be used to provide environment settings directly available in the container.

# docker-compose
environment:
  DB_USER: baby
  DB_PASSWORD: indacorner

// ws-node-demo: some database adapter.ts
process.env.DB_USER
process.env.DB_PASSWORD

The args option is also quite useful for supplying values to ARG variables in a Dockerfile during the build process.

# docker-compose
build:
  args:
    MONGODB_VERSION: 3.2.8
    MONGODB_PORT: 27017
    DB_STORAGE_ENGINE: mmapv1
    DB_JOURNALING: nojournal
    DB_MOUNTPOINT: /mongodb/data

# Dockerfile for Mongo
CMD rm /mongodb/data/mongod.lock || true && \
    /usr/bin/mongod \
    --dbpath $DB_MOUNTPOINT \
    --port $MONGODB_PORT \
    --storageEngine $DB_STORAGE_ENGINE \
    --$DB_JOURNALING

I have only described a few of the options that I frequently use. There are many more options and recipes for working with Dockerfiles and Docker Compose. If you are new to Docker, hopefully this article and the other articles that make up this series can help you get started.


Willie Streeter
will.streeter

I am a builder and building is my passion. I have spent the majority of my professional career expressing this passion through the medium of web development.