Servers by pierrotcarre

Getting started with Docker

A guide to using Docker for the first time.

This guide was originally written as internal documentation, before Docker Community Edition for Windows and Mac was published. Docker Community Edition is the easiest and most robust way to run Docker on Windows or Mac, as it does most of the setup work for you.

This guide is useful as background on how to interact with Docker and for debugging when things go wrong. The original documentation has been updated to indicate which sections you can skip, as Docker Community Edition does the initial setup and configuration for you.

Docker lets you deploy your code inside a container, so when it’s deployed it will have all the dependencies your software needs, whether that’s a specific version of PHP, Java, Node.js, etc. or a custom server configuration.

This is based on an internal guide I wrote. I thought it might be useful for people who’d like a quick introduction to Docker, as good documentation for getting started was hard to find at the time.


If you have installed Docker Community Edition for Mac or Windows (recommended) you can skip this Pre-requisites section and go straight to ‘Building and running a Docker image’.

The command line utility ‘docker’, which you use to create and run Docker images, is part of ‘Docker Engine’. This is the interface people are usually referring to when they talk about Docker.

The command line utility ‘docker-machine’ is ‘Docker Machine’. It’s used to configure virtual machines to run those images, and to manage networking between the virtual machines and your computer, so you can access containers from your computer.

If you have installed Docker Community Edition, docker-machine is installed and set up for you, and you can skip these installation steps.

Docker Community Edition is the most reliable way to run Docker on Mac or Windows, but if you can’t install it or don’t want to then the steps below may be useful.

Installing docker-machine and VirtualBox on a Mac

If you are running Docker on a Mac, you’ll also need VirtualBox to run virtual machines.

You can install both docker-machine and virtualbox using Homebrew:

brew cask install virtualbox
brew install docker docker-machine

Setting up docker-machine for the first time

Important! These steps will not work, and do not apply, if you have installed Docker using the Docker Community Edition application. If you have, you should skip this step.

1. Make sure you have a default machine

When running Docker on a Mac for the very first time you will need to create a “default” machine with VirtualBox. This should only take a minute or two. You can check whether one already exists with:

docker-machine ls

If no default machine is listed you can create one like this:

docker-machine create default
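If the plain create command complains about a missing driver, you can name the VirtualBox driver explicitly. A sketch, wrapped in a function (a name chosen for this example) so nothing runs until you call it:

```shell
# Create the default machine, naming the VirtualBox driver explicitly.
create_default_machine() {
  docker-machine create --driver virtualbox default
}
```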

If you get any errors, you should resolve them before proceeding.

If it all seems to be working, run the following command to export shell variables that point to your default machine:

eval "$(docker-machine env default)"

Note: this assumes the bash shell, which is usually the default.

TIP: You can put this command in your .bash_profile to avoid having to enter it every time you want to interact with docker.
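For example, the tip above can be applied with a one-liner. Note that this appends a line to your real ~/.bash_profile, so check the file afterwards:

```shell
# Append the docker-machine environment setup to the bash profile so
# every new terminal session points at the default machine. The line
# is single-quoted so the eval runs later, not now.
echo 'eval "$(docker-machine env default)"' >> ~/.bash_profile
```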

Building and running a Docker image

How to build and run a Docker image.

1. Create a Dockerfile

A Dockerfile is just a file called “Dockerfile” that lists instructions telling Docker how to build an image to run in a container.

Usually it starts with a FROM directive naming a pre-made image, such as a plain Linux distribution or an official image from a publisher (the Node.js team maintains a number of official images, for example).

You can find images to use with Docker on Docker Hub.

Anyone can publish public images to Docker Hub. You can publish one private image for free; if you want to publish more private images you will need a paid Docker Hub subscription, or you can set up your own private Docker Registry server.

Here is a really simple example Dockerfile. It tells Docker to start with an image called “kstaken/apache2” (an Apache 2 build), run apt-get to install PHP, copy the file ‘index.php’ from the current directory, and then start the Apache server.

FROM kstaken/apache2
LABEL name="my-docker-deployment"
RUN apt-get update && apt-get install -y php5 libapache2-mod-php5 php5-mysql php5-cli && apt-get clean && rm -rf /var/lib/apt/lists/*
COPY index.php /var/www
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

You can find some more examples of simple Dockerfiles online.

As another example, this is the Dockerfile we use to build a Node.js server with ffmpeg installed. It tells Docker to copy files over from the local directory the Dockerfile is in, then to install modules, expose port 80 and start the Node.js server (which defaults to running on port 80):

FROM node:6.11
LABEL name="node-ffmpeg"
RUN apt-get update && apt-get install -y software-properties-common && add-apt-repository 'deb http://httpredir.debian.org/debian jessie-backports main' && apt-get update && apt-get install -y ffmpeg && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/src/app
COPY package.json /usr/src/app
COPY npm-shrinkwrap.json /usr/src/app
COPY index.js /usr/src/app
COPY lib /usr/src/app/lib
WORKDIR /usr/src/app
RUN npm install
EXPOSE 80
CMD ["npm", "start"]

Both these examples create relatively large Docker images by default, but they are great for getting started as they include full versions of PHP / Node.js, so you shouldn’t run into any problems running your website inside Docker.

When it comes to creating images for production, you might want to consider using smaller images that are stripped down to include only things you need.

This will make them quicker to build and to deploy. If you have a complicated application, though, you might need to adapt your code or add directives to your Dockerfile to get it working inside a minimal Docker container, as you might find it’s missing dependencies you need.

You can find a wide range of small pre-existing images that are ready to go. Many of these are based on Alpine, a Linux distribution which starts from only 5 MB in size. Examples include Alpine Node and Alpine PHP, which include Node.js and PHP respectively and are only around 50 MB.

The official repository for Node.js actually provides builds based on Alpine. You can swap between them in the above example by changing the initial line FROM node:6.11 to FROM node:6.11-alpine.
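For example, the start of the Node.js Dockerfile above would become the following, with everything after the FROM line unchanged:

```dockerfile
# Swap the Debian-based image for the much smaller Alpine build:
FROM node:6.11-alpine
```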

There are Docker images for a wide range of languages, including Java, Ruby, Python, Go, etc. You might want to start with an official image and, if that works for you, then see if you can optimise it by swapping in a smaller version.

2. Build the Docker image

Once you have a sample Dockerfile, open a shell in the project directory and build the image:

docker build .

The first time you run the build command for a project it may take a while, as Docker needs to download the base image specified in the Dockerfile. The image is cached, so subsequent builds will be quicker.

Once the download has finished, any steps to build the image (such as copying files over) will run. Typically this takes less than a minute.

3. You should now have an image you can run

When the build step returns, the last line should say something like “Successfully built 41efd0730098”.

The ID (e.g. 41efd0730098) identifies the built image. You can run it with ‘docker run’:

docker run 41efd0730098

If you want to get the IDs of previously built images you can list them with:

docker image ls
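If you’d rather not copy IDs around at all, you can tag the image at build time and run it by name. A sketch, using “my-app” as an example tag and wrapped in a function so the commands only run when you call it:

```shell
# Hypothetical tag "my-app": build the image under a name so you can
# run it by tag instead of copying the ID from the build output.
build_and_run() {
  docker build -t my-app .
  docker run my-app
}
```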

4. Exposing software on a port

To expose port 80 from the container to port 3000 locally (for example, so you can connect to a web server running in your Docker container) you need to use the -p option with ‘docker run’:

docker run -p 3000:80 41efd0730098

You should now be able to connect to http://localhost:3000

Connecting to servers when using docker-machine

If you are using Docker Community Edition then the above steps are all you need. However, if you have installed and set up docker-machine yourself, networking works a little differently: instead of localhost, you need the IP address of your virtual machine, which you can get with:

docker-machine ip default

Instead of connecting to localhost (as in the steps above), connect to the IP address it displays.
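As a sketch, the two steps can be combined into a small helper (a hypothetical name; this assumes the port mapping from the earlier example):

```shell
# Print the URL for a service mapped to host port 3000 on the
# docker-machine VM named "default".
machine_url() {
  echo "http://$(docker-machine ip default):3000"
}
```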

5. Debugging

If your default Docker image stops responding you can tell docker-machine to restart it.

docker-machine restart default

If it still doesn’t respond, you can forcibly kill it and start it again.

docker-machine kill default
docker-machine start default
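The restart-then-kill sequence above can be sketched as a single helper (a hypothetical name; it only falls back to kill and start when the restart fails):

```shell
# Try a graceful restart first; if that fails, force-kill the machine
# and start it again.
recover_machine() {
  docker-machine restart default || {
    docker-machine kill default
    docker-machine start default
  }
}
```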

Instances sometimes stop responding during the build phase, causing the build to hang; restarting the machine should resolve it. You will then need to abort and run the build command again. Don’t worry if this happens; it can be a bit flaky on Mac.

If you can’t figure out what’s going on you can always try a reboot.

You can view containers created with ‘docker run’ (add -a to include stopped ones):

docker container ls

You can run ‘docker logs’ on the container that relates to the instance you’re interested in; this can help if you can’t figure out why an instance isn’t starting up properly, for example.

docker logs cf788a3be231
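If the container is still running you can also stream its logs as they are written, using the -f (follow) and --tail flags. A sketch, wrapped in a function so it only runs when you call it with a container ID:

```shell
# Follow new log output for a container, starting from the last
# 100 lines; pass the container ID as the first argument.
follow_logs() {
  docker logs -f --tail 100 "$1"
}
```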

Deploying Dockerfiles

There are lots of platforms you can deploy Dockerfiles on.

Self-hosted Environments

One way to run Docker images is to set up a Docker Swarm, a scalable clustering solution built into Docker that can be deployed on AWS or Azure.

Another way to deploy Docker images in production is with Kubernetes, a more powerful system, but one which is also more complicated to configure and maintain.

Setting up either a Docker Swarm or a Kubernetes cluster for the first time can be complicated and involves multiple steps; apart from the initial cluster configuration, you will need to set up a deployment system, add and configure persistent storage, and probably configure a management interface and monitoring system too.

Depending on your resources and your application, you might find it’s easier and more cost-effective to use a third party service, especially if you have a small team or relatively simple requirements (such as just one or two components).

You may also find it’s easier to buy some parts of your stack as services rather than maintain them yourself (e.g. persistent databases, private Docker registries, monitoring, etc.). This can save you time, hassle and money, even if you do have your own cluster.

Simple, Scalable Cloud Hosting

Zeit and ‘now’

One of the simplest and most powerful services you can use to deploy Docker images is ‘now’ from Zeit.

You can install “now” with npm install -g now, then just type now in any directory with a Dockerfile in it to deploy it to the cloud. Just make sure it exposes a port; everything else will be done for you.

You can use Now to handle deployment, rollback, cluster scaling, and even domain registration via the command line.

You can also use the ‘now’ command line tool to push Docker images to your own private servers on other cloud platforms.

For more details, see the Zeit website.

Other cloud hosted options

You can run Docker apps really easily on Heroku, which is easy to set up, deploy to and scale, and gives you access to all the features and add-ons on the Heroku platform, though it can get expensive if you use a lot of resources.

Another offering that’s simple, very easy to use and has a flexible billing model offers low-cost containers to run your Docker images in, plus a range of inexpensive add-ons for storage, cron tasks and even a Docker-based ‘serverless’ platform.

Hybrid options

If you want to run your own Docker server and would rather handle configuration and deployment yourself — but without the work that goes into setting up and running a cluster — hosting providers like Digital Ocean let you fire up a server with Docker pre-installed.

Running your own virtual server with Docker has a mix of advantages (cheap, highly flexible) and downsides (requires some administration, not as easy to scale as cloud hosted options), but can be the right fit for some projects.

If you have any tips for services you like that provide simple ways to deploy or manage Docker instances, please share them in the comments.
