On Friday we had a Docker workshop held by one of the craftsmen, Chris. I know people have talked about it a lot and it seems like a big deal, so it was cool to actually try it out.
Before explaining what Docker is, it’s probably useful to explain what a VM is, because the two solve a similar problem. Docker just tackles it in a different way, which makes it more relevant in certain circumstances.
A virtual machine is like spinning up a mini computer inside your actual computer. It simulates another machine by taking the resources it needs, such as CPU and memory, from your current machine.
The connector that allows this to happen is called the hypervisor. This is the part that ‘talks’ to the host OS and allows your VM to take the resources it needs.
A hypervisor can be bare metal or hosted: a bare metal hypervisor runs directly on the system hardware, while a hosted hypervisor runs on top of the host OS.
Docker, unlike a VM, has something called the Docker engine that sits on top of the host OS, and each container shares the OS resources via the Docker engine. This is different from a VM, which takes up its own resources and does not share them with other VMs. Docker can therefore be lightweight, and it is relatively easy to use.
The Docker engine is how Docker interacts with your OS to build and run containers. It uses a client-server architecture: the client talks to the daemon (a background process) whenever you type a command like docker run or docker build.
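You can actually see this client-server split for yourself: docker version reports the client and the server (the daemon) as separate components, each with its own version.

```shell
# The client and the daemon are reported separately, which makes
# the client-server architecture visible. This needs a running daemon.
docker version
```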
If you are using a Mac, you will probably need the following command to get going; it configures your shell so that the Docker client can find the daemon:
eval "$(docker-machine env default)"
A Docker image is like a blueprint for a container, and the Docker Registry, which is open source, holds public or private images you can download: https://docs.docker.com/registry/introduction/.
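Downloading an image from the default public registry is a single command (the ruby:2.2.2 image here is the one from the workshop; any public image works the same way):

```shell
# Pull the official Ruby 2.2.2 image from the public registry
docker pull ruby:2.2.2

# List the images you now have locally
docker images
```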
Docker containers are like directories and should hold all the information you need for a program to run.
Putting it together
I think what makes Docker quite confusing is the layering that it uses. This is what makes it lightweight and popular, but it also makes understanding how the components interact a bit tricky.
Docker uses something called a Union File System (UFS) to build these layers. According to Wikipedia, a UFS:
‘It allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system. Contents of directories which have the same path within the merged branches will be seen together in a single merged directory, within the new, virtual filesystem.’
So file paths that are duplicated are not created again but overlaid with the new parts.
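You can actually inspect these layers: docker history lists the layers an image is made of, one per line (using the ruby:2.2.2 image as an example; any local image works).

```shell
# Each line is one layer, along with the instruction that created it
# and the size it adds on top of the layers below.
docker history ruby:2.2.2
```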
Each image starts from a base image and is then built up with further instructions. These instructions are held in a Dockerfile, which basically tells Docker how you want to build your final image.
Here is a simple Dockerfile I made at the workshop on Friday:
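Based on what it did, it looked something like this (the file name simple.rb comes from the workshop; the exact contents are a sketch):

```dockerfile
# Start from the official Ruby 2.2.2 base image
FROM ruby:2.2.2

# Copy the script into the container
COPY simple.rb simple.rb

# Run the script when the container starts
CMD ["ruby", "simple.rb"]
```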
The base image here is Ruby Version 2.2.2.
The instructions are the COPY and CMD following on from the base image. This just copies the file over to the container and executes the command ruby simple.rb.
It’s also useful to tag images with an easy to remember name instead of referring to them by their auto-generated image ID.
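Tagging happens at build time with the -t flag (the name simple-ruby here is just an example, not from the workshop):

```shell
# Build the image from the Dockerfile in the current directory
# and tag it with a memorable name
docker build -t simple-ruby .
```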
Once the image has been built, Docker layers a readable and writable component on top of the read-only image that was created from the Dockerfile.
When you call docker run with various options, it starts a new container from that image, inside which you can run your app.
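A few of the standard options in action (the image name simple-ruby is just an illustrative tag; the flags themselves are standard Docker CLI ones):

```shell
# Run a container from a tagged image and remove it when it exits
docker run --rm simple-ruby

# Run interactively with a terminal attached, starting a shell
# instead of the image's default command
docker run -it --rm simple-ruby /bin/bash

# Map port 8080 on the host to port 80 inside the container
docker run --rm -p 8080:80 simple-ruby
```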
What’s cool is that Docker also creates a network interface that lets you talk to your containers, and it sets up an IP address for you (this was useful for me because I was using a virtual machine that had an IP address allocated to it).
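If you’re using docker-machine, you can ask for that IP address directly (default here is the machine name from the earlier eval command):

```shell
# Print the IP address of the VM the containers run in —
# this is the address you hit instead of localhost
docker-machine ip default
```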
Docker is coming out with a native Mac version soon, though, which means you’ll be able to talk to your containers via localhost instead.
So that’s a pretty broad overview of what I’ve understood of Docker so far. I’m sure it gets much more complex, but that’s for another day! I’m going to end this post on the cute Docker whale, here it is!