Making right things using Docker

Evheniy Bystrov
HackerNoon.com
Oct 23, 2017 · 11 min read


In this article I want to show how to use docker for development and testing, and to show that now is the time to switch from development to engineering, from single stack to full stack. And of course full stack is not only frontend and backend, it’s the environment too. Docker is a great tool for this stuff.

I also want to share some thoughts on how, in the near future, full stack will include machine learning as well. I’ll show how easy it is to use docker in this area.

Docker philosophy

Docker is an open source software development platform. Its main benefit is to package applications in “containers,” allowing them to be portable among any system running the Linux operating system (OS).

Think of a Docker container as another form of virtualization. Virtual Machines (VM) allow a piece of hardware to be split up into different VMs — or virtualized — so that the hardware power can be shared among different users and appear as separate servers or machines. Docker containers virtualize the OS, splitting it up into virtualized compartments to run container applications.

This approach allows pieces of code to be put into smaller, easily transportable pieces that can run anywhere Linux is running. It’s a way to make applications even more distributed, and strip them down into specific functions.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example, differences between development and staging environments, and help reduce conflicts between teams running different software on the same infrastructure.

After installing docker, you work with it from the command line:
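If it’s installed correctly, you can check it like this (standard docker CLI commands):

    docker --version   # print the installed docker version
    docker --help      # list all available commands and options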

There are a lot of parameters and options. We will work mostly with build, images, run, exec, rm and rmi.

Dockerizing a node.js app using YEPS

You can start with the official node documentation article: Dockerizing a Node.js web app.

To work with a docker container you need to get an image from docker hub or create your own image using a Dockerfile and the docker build command. Let’s create our own image using node.js and the YEPS framework.

I created a repository on github so you can get the source.

Dockerfile:
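Here is a minimal sketch of that Dockerfile, reconstructed from the description below (the working directory /www and the npm start command are my assumptions):

    # extend the latest official node image from docker hub
    FROM node:latest

    # create a working directory in the container (path is an assumption)
    WORKDIR /www

    # copy package.json and install dependencies first,
    # so docker can cache this layer
    COPY package.json /www
    RUN npm install

    # copy all other files to the container
    COPY . /www

    # make the container listen on port 3000
    EXPOSE 3000

    # run our server (the start script is an assumption)
    CMD [ "npm", "start" ]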

To create your own image you need to extend an existing image using FROM. This image extends the latest version of the official node image on docker hub.

Next we need to create a working directory in the container. When working with node.js, it’s good practice to copy package.json and install all dependencies before copying the other files, so docker can cache the dependency layer. So I copy it and run the npm install command to get all dependencies, and after that I copy all files to the container.

The EXPOSE command makes the docker container listen on a port, and the CMD command runs our server.

There can be only one CMD command. It’s the philosophy of docker — one process per container.

After building, we can store our image on docker hub or in our own private image registry, but I won’t describe that in this article.

To build it run:
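    # -t sets the image name, as described below
    docker build -t yeps .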

All examples and commands for working with this image are described in the README.md file. The -t option sets a name for the image, in our case yeps.

To run the container:
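    # map host port 3000 to container port 3000 and run as a service
    docker run -d -p 3000:3000 yeps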

The -p option maps a port from the host machine to the container port, and the -d option runs it as a service. Open http://localhost:3000/ and see the working node application.

There are a lot of ways to stop it. The simplest is docker stop <containerID>. To check the container id, run docker ps -a. If you run docker image ls you can find the images: node and yeps. To delete them, use docker image rm. But there is a shorter way to stop the container and remove the image:
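    # stop and remove the container in one step, then remove the image
    docker rm -f <containerID>
    docker rmi yeps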

Interactive mode

Docker also lets you work with images without building your own. On docker hub you can find a lot of interesting official and non-official images. You can extend them to make your own new image, or just run them as they are.

And one interesting example is testing. On the official docker hub page for node.js there are a lot of images for different node.js versions, which is useful for testing. For example, I’ll show how to test any node.js application. Let’s try it with the YEPS framework. First we should get the code from github:
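Something like this (the repository URL is my assumption based on the YEPS project name):

    git clone https://github.com/evheniy/yeps.git
    cd yeps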

Then we need to run the npm test command using any node version, like this:
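    # run tests inside a disposable node 8 container
    docker run -it --rm -v "$PWD":/www -w /www node:8 /bin/bash -c "node -v && npm -v && npm i && npm t"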

Here we run a container using the node:8 image (I use the latest 8.x version; you can specify any other version). The -it parameter runs it in interactive mode, and the --rm parameter cleans up all data after it finishes. How to clean the disk of old containers and images, I’ll describe later in this tutorial.

The -v option maps the current directory to /www, and the -w option is an analog of the cd command (change directory): it makes our commands run in that directory.

And we run our node.js commands node -v && npm -v && npm i && npm t using /bin/bash with the -c flag.

If you need to run the same commands using node.js 7, just change the node docker image:
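    # same command, different image tag
    docker run -it --rm -v "$PWD":/www -w /www node:7 /bin/bash -c "node -v && npm -v && npm i && npm t"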

You can find all node images from the official repository on docker hub. As a good practice, if there are images based on alpine linux, it’s better to use them to take up less disk space. So if we need to test our app using the latest node version, just run this (on alpine linux we need to use /bin/sh instead of bash):
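    # alpine images use /bin/sh instead of /bin/bash
    docker run -it --rm -v "$PWD":/www -w /www node:alpine /bin/sh -c "node -v && npm -v && npm i && npm t"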

Database as a service

For testing YEPS packages with different databases I use docker, for example in yeps-redis. For running and stopping tests I added commands to the scripts section of package.json:
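A sketch of those scripts, assuming the official redis image and hypothetical script names db and db:stop (check the package’s README for the real ones):

    "scripts": {
      "db": "docker run -d --name redis -p 6379:6379 redis",
      "db:stop": "docker rm -f redis"
    }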

The same for yeps-mysql:
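Again a sketch with assumed script names; the environment variables are the standard ones for the official mysql image:

    "scripts": {
      "db": "docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_USER=user -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=test mysql",
      "db:stop": "docker rm -f mysql"
    }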

Here, using the -e option, I set environment variables like user and password.

And for yeps-mongoose:
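A sketch along the same lines, with the official mongo image:

    "scripts": {
      "db": "docker run -d --name mongo -p 27017:27017 mongo",
      "db:stop": "docker rm -f mongo"
    }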

As I use TravisCI for testing, it’s easy to use docker as a service there because TravisCI is based on docker. Just register the repository and create a .travis.yml like the one I made for yeps-mongoose:
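A minimal sketch of such a .travis.yml (the node version and the db script names are my assumptions):

    language: node_js
    node_js:
      - "8"
    services:
      # enable the docker service in the build environment
      - docker
    before_script:
      # start the database container before the tests (assumed script name)
      - npm run db
    script:
      - npm t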

If you need to work with private CI services, you can use Jenkins or TeamCity. Jenkins has an official repository on docker hub where you can find documentation on how to run it. For example:
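    # image name from the official repository at the time of writing;
    # 8080 is the web UI port, 50000 the agent port
    docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins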

To get access to the admin UI you need the admin password. As we run our container with the -d option, we can get it only from /var/jenkins_home/secrets/initialAdminPassword:
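    # run cat inside the running jenkins container
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword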

Here I use the exec command. It helps to run a command like “cat /var/jenkins_home/secrets/initialAdminPassword” inside a running container.

And the command “docker rm -f jenkins” will stop and remove it.

It’s almost the same for TeamCity, but here we need to run the main process (the server) and the build agents in separate containers.

But first we need to create a directory where we can store our configs and logs:
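    # the subdirectories are my assumption, based on the volume mappings below
    mkdir -p teamcity/datadir teamcity/logs teamcity/agent1 teamcity/agent2 teamcity/agent3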

To start TeamCity server just run:
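A sketch based on the official jetbrains/teamcity-server image documentation:

    docker run -d --name teamcity-server \
      -v $PWD/teamcity/datadir:/data/teamcity_server/datadir \
      -v $PWD/teamcity/logs:/opt/teamcity/logs \
      -p 8111:8111 \
      jetbrains/teamcity-server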

And it’s almost the same for the build agents (in the free version we can work with only 3 build agents):
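A sketch for the first agent, based on the official jetbrains/teamcity-agent image (replace <server-ip> with your server address, and repeat with the agent2 and agent3 names and directories):

    docker run -d --name teamcity-agent-1 \
      -e SERVER_URL="http://<server-ip>:8111" \
      -v $PWD/teamcity/agent1:/data/teamcity_agent/conf \
      jetbrains/teamcity-agent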

I specified different names and directories for each agent. As you can see, I mapped directories from the server and the agents to the local teamcity directory.

Docker compose

Docker is a good thing if you need to build and run a single image. But in most real apps you need to work with different things at the same time: databases, instances in cluster mode, microservices… And Docker compose is the perfect tool for this.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.

Compose has commands for managing the whole lifecycle of your application:

  • Start, stop, and rebuild services
  • View the status of running services
  • Stream the log output of running services
  • Run a one-off command on a service

Let’s create a compose version of our TeamCity cluster in docker-compose.yml:
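A sketch of that file, reconstructed from the docker run commands above (the service names are my assumptions; inside compose the agents can reach the server by its service name):

    version: '3'
    services:
      server:
        image: jetbrains/teamcity-server
        ports:
          - "8111:8111"
        volumes:
          - ./teamcity/datadir:/data/teamcity_server/datadir
          - ./teamcity/logs:/opt/teamcity/logs
      agent1:
        image: jetbrains/teamcity-agent
        environment:
          - SERVER_URL=http://server:8111
        volumes:
          - ./teamcity/agent1:/data/teamcity_agent/conf
      agent2:
        image: jetbrains/teamcity-agent
        environment:
          - SERVER_URL=http://server:8111
        volumes:
          - ./teamcity/agent2:/data/teamcity_agent/conf
      agent3:
        image: jetbrains/teamcity-agent
        environment:
          - SERVER_URL=http://server:8111
        volumes:
          - ./teamcity/agent3:/data/teamcity_agent/conf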

We use almost the same parameters (ports, images, volumes) but with some compose specific updates.

To start compose, just run docker-compose up, or run it as a service:
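    docker-compose up -d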

And to stop:
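    docker-compose stop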

To bring everything down (with volumes):
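    docker-compose down --volumes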

Or if you want to remove docker images:
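    docker-compose down --rmi all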

Let’s update our previous node.js YEPS example with a cluster of node.js instances and nginx as a load balancer.

I created a github repository, so just clone it:

In this example we have docker-compose.yml and two directories: nginx and node. You can open the links and check each file. I use the same idea as in the previous compose example, but this time I build my own image for nginx:
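A minimal sketch of such an nginx image, assuming a local nginx.conf with an upstream block that balances the node instances:

    FROM nginx
    # replace the default config with one that load balances the node containers
    COPY nginx.conf /etc/nginx/nginx.conf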

and for node I use the existing Dockerfile.

Docker and Docker compose really help with development and testing of real modern applications, even ones using a microservice architecture. But you can use them not only for development. Next I’ll show you how to use them for data science experiments.

Machine learning

The same idea is useful not only for development, I mean computer science: it’s useful for data science too. And running machine learning experiments in a docker container is a good practice as well.

For my experiments in machine learning I created a github repository with a docker image based on python anaconda.

To work with this container, just clone the git repository and build the image:
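    # inside the cloned repository directory (image name from the README commands below)
    docker build -t python .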

As I work a lot with node.js and npm, I created commands in the package.json scripts section and described them in the README.md file.

So for building just run npm run build or “docker build -t python .”. And to start: npm start or “docker run --name python -p 8888:8888 -v $PWD/python:/opt/notebooks -d python”. Here I use the -d parameter to daemonize the process and -v to map the current directory, so all my data is kept after stopping the container.

Then open http://localhost:8888 with the password root.

You can find some examples in the jupyter notebook web UI and run any of them, for example plot_face_recognition:

Docker helps to work with the same environment (python, scikit-learn, SkPy…) in any place. After finishing, you need to stop the container using npm run stop and clean up disk space using npm run rm.

Cleaning

Docker makes it easy to wrap your applications and services in containers so you can run them anywhere. As you work with Docker, however, it’s also easy to accumulate an excessive number of unused images, containers, and data volumes that clutter the output and consume disk space.

Docker doesn’t provide direct cleanup commands, but it does give you all the tools you need to clean up your system from the command line. In this tutorial you can find a quick reference to commands that are useful for freeing disk space and keeping your system organized by removing unused Docker images, containers, and volumes.

Here are some useful commands:

List of all images:
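    docker images -a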

Running containers:
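    docker ps   # add -a to include stopped containers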

One line to stop and remove all containers:
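    # -f stops running containers before removing them
    docker rm -f $(docker ps -aq)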

And remove all images:
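    docker rmi -f $(docker images -q)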

To delete all dangling volumes:
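    docker volume rm $(docker volume ls -qf dangling=true)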

Conclusion

In this tutorial I showed some useful examples and commands for working with docker. There are many other combinations and flags that can be used with each. To get more information, you can read the docker documentation, and I recommend finishing a Udemy course.

Docker can help with researching and testing new tools: databases, machine learning and big data tools like Hadoop and Apache Spark. You can run them locally and keep your PC clean after stopping them.

As I said before, docker is a great tool for development and testing. In production you will use devops help for configuring web services like AWS. So with docker you can try anything now, and let devops take care of production.
