[Utility Post] Docker for Busy People

Victor Algaze
9 min read · Nov 7, 2016


Non-clickbait title: A Very Clear & Long-winded, Step-by-step Docker 101

I’m not sure exactly who you are, but I want to imagine you are a front-end-heavy occasional full-stack engineer, or someone who has read Frederic Lardinois’ ‘wtf is a container’, or just someone who has recently gotten the job to “Dockerize” a service. And hopefully the success of some company initiative depends on your efforts. And ideally you’ve never “really” used Docker before.

There is of course a lot to learn with Docker — downsides, limitations, mistakes to avoid, tooling, alternatives, etc. This article does not cover the important advanced, subtle stuff. This article represents the absolute bare minimum amount of information (with a dash more) you should know in order to do useful things with Docker.

Quick Note on Terminology

Getting up to speed with Docker and its myriad options and commands can be a surprisingly cerebral experience. I think the biggest challenge for some people is that Docker requires you to re-contextualize a lot of common words, especially ones you probably don’t think about much like “build” vs “create” or an “image” vs a “container”.

There are many more examples of this, but the most important thing you need to know for now is that:

You BUILD Images

and

You CREATE/RUN/START Containers
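To make those verbs concrete, here is a minimal sketch of the commands involved (my_image and my_container are placeholder names; we’ll build up the real versions below):

# BUILD an image from the Dockerfile in the current directory
$ docker build -t my_image .

# CREATE a container from that image, then START it
# (my_container is just a placeholder name)
$ docker create --name my_container my_image
$ docker start my_container

# ...or RUN, which does the create + start in one step
$ docker run --name my_container my_image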

With that in mind here’s the plan: We are going to take a nodeJS server that runs on port 8000 and put it in a container and run it on your machine exposed on port 1234. Then we’ll flip things and take a server that runs internally on port 1234 and expose it on your machine on port 8000, but tell the server to run on port 1234 by passing in an environmental variable.

If you can make it through that, you can jump into the extensive Docker docs and figure out the rest as you need it.

Prerequisites

You should have Docker installed on your machine, know your way around a terminal prompt, and maybe have the Kitematic Docker management GUI tool installed (definitely not essential right this minute, since we’ll be doing everything from the command line). If you are indeed using Kitematic, you can open a shell with the correct environmental variables directly from the Kitematic UI.

Once you’re in a terminal session, check to make sure everything is working by entering $ docker version.

DO NOT proceed until $ docker ps runs without an error.
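If everything is wired up correctly, both checks should succeed. Roughly like this (the docker ps table will be empty until you start some containers, and the exact columns can vary a bit by Docker version):

$ docker version
$ docker ps
# CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES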

Images vs Containers: Images

If you’re following along at home (and to get the most out of this guide, it might help) clone this repo & we’ll get started:

$ git clone https://github.com/valgaze/docker-for-busy-people && cd docker-for-busy-people

Inside the repo, check the folder _tmpDockerfile and inspect the Dockerfile. A Dockerfile (see here for all the details: https://docs.docker.com/engine/reference/builder/) is basically a configuration file that you use to build Docker IMAGES.

This is what our Dockerfile looks like:

# Specify the base image (can be a name fetched from here: https://hub.docker.com/explore/)
FROM node:6.3
# Run a command; this one happens to create a directory
RUN mkdir -p /usr/applicationSrc
# Set the working directory for the instructions below and for the running container
WORKDIR /usr/applicationSrc
# Run this command when the container starts
CMD [ "npm", "start" ]

Nothing too interesting here. We tell Docker to base everything off Docker’s node:6.3 image (if needed, Docker will download that base image for you), create and set a working directory, and then run $ npm start in that working directory when the container starts.

One good gotcha/trick to know about when building images from Dockerfiles like this: unless you have a very good reason, keep the Dockerfile in its own mostly empty directory. Otherwise, when you build the image, you can accidentally vacuum up other files and sub-directories in the “build context” and send them over to Docker’s daemon, which can slow operations down considerably.

The Docker docs do in fact allude to this behavior:

Warning: Do not use your root directory, /, as the PATH as it causes the build to transfer the entire contents of your hard drive to the Docker daemon.

cd into the _tmpDockerfile directory and use the Dockerfile to build a Docker IMAGE, naming (or “tagging”) it “my_image”:

$ cd _tmpDockerfile && docker build -t my_image .

You can use the following command to see all the images on your machine and verify that “my_image” exists:

$ docker images

Note that you can interchangeably reference Docker images by the name/tag you assign to them or by the ID Docker generates for them.
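For example, these two commands do the same thing if abc123def456 happens to be my_image’s ID (that ID is a made-up placeholder; check $ docker images for your real one):

# abc123def456 is a placeholder image ID
$ docker inspect my_image
$ docker inspect abc123def456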

Images vs Containers: Containers

Now that we’ve built our image and verified it exists, it’s time for us to CREATE our container. In Docker you use images as the starting point for containers.

To create a container, you need to use the $ docker create command. Without any superfluous options, the $ docker create command looks like this:

$ docker create --name {container_name} {image_name}

To add extra configuration to the container, you will need to provide some options to $ docker create. If you dive right in and consult the $ docker create documentation, the first thing you will notice is that there are A LOT of CLI options…

There’s good news and there’s great news here: The good news is that Docker by necessity needs to handle many corner cases, so there’s a good chance that if you can think of it, Docker will have a flag for it. The great news is that you do not need to worry about all those flags and options right now. For our purposes today, we will only worry about PORTS (-p) and VOLUMES (-v).

PORTS

The ports flag is set with -p (short for --publish) and follows the structure:

-p {EXTERNAL_PORT}:{INTERNAL_PORT}

ex.

-p 1234:8000 #port 1234 on the host machine, 8000 "in" the container

The ports can be whatever number range you want, but just make sure that the external port is in fact open and available, otherwise your container will not start. You can have multiple port mappings on a single container.
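For example, repeating the flag publishes several ports at once (the second mapping below is purely illustrative; our demo app only listens on one port):

# Host 1234 -> container 8000, host 1235 -> container 9000
# (multi_port_example is just a placeholder container name)
$ docker create --name multi_port_example -p 1234:8000 -p 1235:9000 my_image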

VOLUMES

The volume flag is set with -v (short for --volume) and specifies where to mount a directory or “data volume” from the host (in this case your machine) into the container. Note that when done this way, the data in the volume stays on the host; it isn’t “copied” into the container. (Ex. If you mount a directory from your machine into a container and edit one of its files inside the container, the file will be changed on your filesystem even after the container shuts down.)

The volume flag takes the form:

-v {ABSOLUTE_PATH_HOST_MACHINE}:{ABSOLUTE_PATH_INSIDE_CONTAINER}

ex.

-v /Users/victor/docker-for-busy-people:/usr/applicationSrc

You can have multiple volume mounts in a container.
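As with ports, you just repeat the flag. A quick sketch with hypothetical paths and a placeholder container name:

# Mount the application source plus a separate (hypothetical) config directory
$ docker create --name multi_volume_example -v /Users/victor/docker-for-busy-people:/usr/applicationSrc -v /Users/victor/some-config:/usr/config my_image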

Creation

Putting it all together, here is how we create a container named “my_container_1” which mounts a volume and maps container port 8000 to port 1234 on the host machine:

$ docker create --name my_container_1 -p 1234:8000 -v /Users/your_absolute_path/docker-for-busy-people:/usr/applicationSrc my_image

Verify the container exists with the $ docker ps command with the -a flag:

$ docker ps -a 

Note the use of the -a flag, which will include containers like my_container_1 that have been CREATED but not yet STARTED. Creating a container and starting it are discrete steps.

Since everything should be ready to go let’s start the container:

$ docker start my_container_1

You can take a look at the logs of the container with $ docker logs my_container_1
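Two handy variations at this point (a quick sketch, not an exhaustive list): -f streams the logs instead of dumping them once, and stop/start let you bounce the container without recreating it.

# Follow the logs continuously
$ docker logs -f my_container_1

# Stop and restart the container without recreating it
$ docker stop my_container_1 && docker start my_container_1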

Viewing Dockerized Apps in the Browser

Now you should be able to visit localhost:1234 and see your magnificent container. If you can, go ahead and skip to “Running my_container_2”.

localhost:1234 is blank!

Note: If the container is running and localhost:1234 is blank, you might be using an older version of Docker. The details aren’t especially important, but the immediate implication is that there is actually a Linux VM running Docker, and accordingly your containers are not accessible on localhost.

If you cannot access the app in your browser on localhost:1234, use Kitematic or type the following into the console to discover the IP address:

$ docker-machine ip # Will be something like 192.168.99.100

From there, try to visit that IP address on port 1234, ex. 192.168.99.100:1234. If all went well, you should see the app running.
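If you’re ever unsure which host port a container is published on, docker port will tell you:

$ docker port my_container_1 # should show container port 8000 mapped to host port 1234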

Running my_container_2

For our last trick, we will make another container that runs our web app, but this time we will pass it an environmental variable instructing the server to listen on port 1234. We will then expose it on the host machine on port 8000.

Environmental Variables

Docker allows you to pass in a virtually unlimited number of environmental variables (or even a reference to a file of ENVs) to your containers. Environmental variable flags take the form:

--env {NAME}={VALUE}

ex.

--env SPECIAL_PORT=1234
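The file-based form mentioned above looks like this (env.list is a hypothetical file containing lines like SPECIAL_PORT=1234, and env_example is a placeholder container name):

$ docker create --name env_example --env-file ./env.list my_image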

We will use $ docker run instead of $ docker create to instantly create and then START the container in a single step, but all the configuration we are about to use would work just as well with $ docker create:

$ docker run --name my_container_2 --env SPECIAL_PORT=1234 -p 8000:1234 -v /Users/YOUR_ABSOLUTE_PATH/docker-for-busy-people:/usr/applicationSrc my_image

If you visit your Docker host IP address (depending on your setup, either localhost or something like 192.168.99.100) on port 8000, you should see the app responding.
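If you want to double-check that the variable actually made it into the container, docker exec lets you run a quick command inside it (the env output will include plenty of other variables too):

$ docker exec my_container_2 env # SPECIAL_PORT=1234 should be in the list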

Wrap up & clean up

That’s pretty much all you need to know to get started doing something useful with Docker. As special-snowflake situations arise, check the Docker docs.

Before you go, two last handy utility commands:

1. Kill all the containers on your machine (meaning kill them ALL completely):

$ docker stop $(docker ps -q) && docker rm $(docker ps -aq)

2. After you run the command above to kill all the containers, use this to remove all your images:

$ docker rmi $(docker images -q)

One Last New Addition…docker-compose

If you’re using the latest/greatest version of Docker, its docker-compose tool should already be available on your system. Check your version with:

$ docker-compose -v

If docker-compose is unavailable for some reason, follow the guide here: https://docs.docker.com/compose/install/#install-compose

docker-compose is a tool which essentially lets you define and run multiple containers using a single configuration file called docker-compose.yml (a full guide to docker-compose.yml is available here: https://docs.docker.com/compose/compose-file/). Docker-compose will generate names for your containers based on the name of the directory the docker-compose.yml lives in and the service names inside it.

From the repo you cloned down (https://github.com/valgaze/docker-for-busy-people), move into the “compose-example” directory and run the following command to start everything up (we’ll explain what’s happening in a moment):

$ docker-compose up
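Two related commands worth knowing (a sketch, not an exhaustive list): -d starts everything in the background instead of attaching to the logs, and docker-compose ps lists the containers (and generated names) that compose created.

$ docker-compose up -d
$ docker-compose ps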

Our single docker-compose.yml looks like this:

version: '2'
services:
  server1:
    build: ./server1
    ports:
      - 8000:8000
    environment:
      - PORT=8000
    volumes:
      - ./server1:/usr/app
  server2:
    build: ./server2
    ports:
      - 8001:8001
    environment:
      - PORT=8001
    volumes:
      - ./server2:/usr/app

From a single docker-compose.yml file we:

  • Define two services, named ‘server1’ and ‘server2’
  • Server1 is mapped to port 8000 and retrieves data from server2
  • Server2 is mapped to port 8001 and generates our data
  • Volumes and environmental variables work just like they did above

Important: Inside a container, “localhost” refers to the container’s localhost and not your system’s localhost. Thanks to docker-compose, server1 can simply refer to server2 by its service name, http://server2 (see here for full context):

const requestHandler = (request, response) => {
  /*
  Note: We're referring to 'server2' below
  since 'localhost' would refer to the container
  housing server1 (this server)
  */
  retrieve('http://server2:8001').then((output) => {
    console.log('Server1', output);
    response.end('From other server:' + output);
  });
}

When you run docker-compose up, it parses the docker-compose.yml, creates the two containers (server1 and server2), and sets up a network they can use to exchange data.
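You can see that network for yourself with docker network ls; its exact name depends on the directory/project name, so treat the output as illustrative:

$ docker network ls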

This is a simple toy example, but it is a good starting point for starting multiple containers and exchanging data between them. You can tear the containers down with the following:

$ docker-compose down
