“The map is not the territory” — Alfred Korzybski.
First World War. A small group of Hungarian troops camps in the Alps. Conditions are harsh: inadequate shelter, unhealthy food or no food at all, soldiers often wearing borrowed clothes. Their commander, a young lieutenant, decides to send out a small group of his best men on a scouting mission. Shortly after the scouting group leaves, it begins to snow, and it snows steadily for two days. After two full days, there is no sign of the scouting squad. The young officer thinks he has sent his men to their deaths.
But unexpectedly, on the third day, the long-overdue scouting squad returns. There is great joy and a sense of relief in the camp. The lieutenant welcomes his men and starts questioning them eagerly. “Where were you?” he asks. “How did you survive? How did you find your way back?” The sergeant who led the scouts replies, “We were lost in the snow, and we had given up hope. We had resigned ourselves to die. But then one of the men found a map in his pocket, and with its help we knew we could find our way back.” The young commander then asks to see the map. As he studies it, a perplexed look crosses his face: it is a map not of the Alps but of the Pyrenees!
Based on a story first told by Albert Szent-Györgyi, the Hungarian physiologist who won the Nobel Prize in Medicine in 1937 for discovering vitamin C, and who served in the First World War.
There are different angles to this story. We know the map inspired hope among the soldiers and gave them the confidence to carry on. Maybe they simply got lucky, or maybe the map helped them make good decisions, though on false grounds. In either case, I can only wonder what these brave men could have achieved if they had had the right gear, the proper supplies, and the right map in the first place.
As a consultant, I work with development teams, and more often than not I have the same kind of feeling. Most are understaffed, with few or no technical resources, working on inherited systems and platforms, applying ad-hoc tools and borrowed approaches and code to meet deadlines.
But, against all odds, they still manage to bring it home. I can only wonder what these teams could have achieved, and how much pain could have been avoided, if they had used Docker!
Exploring new territories
For them, and for the few people out there who may not know what containers are or what Docker is, here is a short video in which Docker CEO Ben Golub explains in plain English how container technology works and what benefits it can bring to development (and operations) teams.
Ben explains Docker and containers in a breeze, just as a map describes the territory and helps you understand it at a glance. But Docker has grown to mean many things since Solomon Hykes first showed it to the public at PyCon in March 2013. It began earlier as an internal software project to make the deployment of applications easier at dotCloud, a PaaS provider. It has since grown into an ecosystem of tools that can package applications together with their dependencies and deploy them anywhere: on the developer’s laptop, on premises behind your firewall, or on any public cloud.
Same territory, different maps
Some argue that Docker should no longer be considered a set of tools, but rather an approach to software packaging and distribution. And there’s no doubt that containers make shipping applications dramatically easier and faster compared with traditional approaches.
Some go further and say that Docker has revolutionized the way software is architected, as it facilitates the building of modular, micro-service oriented applications. And, indeed, containers seem to be a natural ecosystem for micro-services applications.
The Docker website summarizes all this well: Docker — Build, Ship, and Run Any App, Anywhere.
You’ll also find Docker mentioned in other contexts and territories. Docker and DevOps are often used in the same phrase: “DevOps grows, and Docker spreads like wildfire, especially in the enterprise” (RightScale State of the Cloud report, January 2016). And this makes sense too, as one of the promises of containers is to offer “separation of concerns” between development and operations teams. Docker is becoming instrumental in facilitating both the cooperation between these teams and the automation of code pipelines.
But, all in all, it’s Docker’s mission that gives you an understanding of what Docker really is at its core. Docker is about “creating tools of mass innovation”.
A map legend
Although Docker makes things easier around applications, when you first start using it the tooling can be confusing. All the tools are named “Docker” something. You have a Docker Client and a Docker Engine. You have a tool called “Docker Machine”. You can build a Docker Swarm cluster. You can package your multi-container applications with “Docker Compose”. Docker… Docker… Docker… So, let’s try to make sense of all these tools and their relationships, and review some key concepts.
The X on the map
Docker Engine. It’s a lightweight runtime tool that builds and runs Docker containers. How does it work? Docker is, at its core, a client-server application. The Docker Client talks to the Docker Engine through a RESTful API to execute commands that build, ship, and run containers.
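For instance, assuming a local Engine is running and listening on its default Unix socket, you can see both halves of this client-server conversation from a terminal:

```shell
# The Docker Client reports its own version and the Engine's
docker version

# The same information, fetched by talking to the Engine's RESTful API
# directly over its Unix socket (no Docker Client involved)
curl --unix-socket /var/run/docker.sock http://localhost/version
```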
There are four first-class citizens in the Docker Engine world (they all have their own IDs), and by getting them to work together you can build, ship, and run applications anywhere:
- Images. Images are used to package applications and their dependencies. They can be stored locally or in a registry, a service that organizes and provides repositories of images.
- Containers. A container is a running instance of a Docker image.
- Networks. You can connect and isolate containers on private networks that only exist within a host and, since Docker 1.9, on private networks that can span multiple hosts.
- Volumes. Volumes are designed to persist data, independently of the container’s life cycle.
We put together the following chart illustrating the most common Docker Client commands and their relationship with images, containers, networks and volumes.
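As a minimal illustration of the four citizens working together (the names `webnet`, `webdata` and the nginx image are just examples):

```shell
# Image: pull the official nginx image from a registry (Docker Hub by default)
docker pull nginx

# Network: create a private network to connect and isolate related containers
docker network create webnet

# Volume: create a named volume whose data outlives any single container
docker volume create --name webdata

# Container: run a container from the image, attached to the network,
# with the volume mounted inside it
docker run -d --name web --net=webnet -v webdata:/usr/share/nginx/html nginx

# Each of the four objects has an ID you can inspect
docker inspect web
```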
Something really cool about Docker Engine is that it follows the “batteries included but removable” principle. You can extend the core capabilities of the Docker Engine by loading third-party plugins. Currently, Docker supports authorization, volume, and network driver plugins. For example, Flocker is a volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines. Weave is a network plugin that creates a virtual network connecting your Docker containers across multiple hosts or clouds, and enables automatic discovery of applications.
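As a sketch of how transparent this is, and assuming the Flocker plugin is already installed on your hosts, using a third-party volume driver looks just like using the built-in local one:

```shell
# Create a volume backed by the Flocker driver instead of the default local one
docker volume create --driver=flocker --name=portable-data

# Start a stateful container on this volume; if the container is later
# rescheduled on another host, Flocker moves the data along with it
docker run -d --name db -v portable-data:/var/lib/data redis
```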
Docker offers three image distribution tools that store and manage Docker images. To host your private images, you may either use a hosted registry service or run your own registry.
- Docker Registry. It’s an open source image distribution tool that stores and manages Docker images behind your firewall.
- Docker Trusted Registry. It’s a commercial image distribution tool with a graphical console that stores and manages Docker images. DTR provides an additional set of features to help enterprises address security and regulatory compliance requirements.
- Docker Hub. It’s a cloud hosted service from Docker that provides registry capabilities for public and private image repositories. The Docker Client defaults to the Docker Hub if the registry is not specified.
- Docker Machine. It’s a provisioning tool that makes it really easy to go from “zero to Docker”. Machine creates Docker Engines on your laptop, on most popular cloud providers (AWS, Azure, Google Cloud, SoftLayer…), or in your data center (VMware, OpenStack). Docker Machine creates virtual servers, installs the Docker Engine on them, and finally configures the Docker Client to securely talk to them.
- Docker Swarm. It’s a clustering tool for Docker containers. It pools together several Docker Engines and exposes them as a single virtual Docker Engine. Swarm serves the standard Docker API, so any tool that already communicates with a Docker Engine can use Swarm as its backend to transparently scale to multiple hosts. Swarm clusters can be configured and deployed with Docker Machine too. Docker Swarm is designed for scale: Docker released on GitHub the swarm-bench code they used to deploy 30,000 containers on 1,000 AWS nodes with just one Swarm manager. You can also see how it fares against other similar tools here. It is worth mentioning that Docker Swarm is lightweight enough to deploy additional orchestration tools like Kubernetes, Mesosphere Marathon, or Nomad on top.
- Docker Compose. It’s an orchestration tool that makes spinning up multi-container applications effortless. Docker Compose can run multi-container applications, with a single command, on anything that can speak the Docker API, including Docker Swarm. By default, Docker Compose sets up a single network for your application. Each container joins the default network and is both reachable by other containers on that network and discoverable by them at a hostname identical to the container name. When you run Docker Compose against a Docker Swarm, it uses a multi-host network automatically.
- Docker Universal Control Plane. It’s a commercial management tool with a graphical console for containerized applications regardless of where they are running: behind your datacenter firewall or on public clouds. It provides an integrated and SSO experience with Docker Trusted Registry and Docker Swarm. Docker Compose is built directly into the UI so users can deploy multi-container apps across clusters with ease.
- Docker Datacenter. It’s a commercial subscription that integrates Docker Trusted Registry, Docker Universal Control Plane, and a commercially supported Docker Engine. Adam Herzog, from Docker, summarizes well the synergies and benefits you get when you combine these three products: “Docker Universal Control Plane (UCP) enables enterprises to manage and deploy their applications to any environment, whether on-premises, private or public cloud. Since UCP runs on top of Swarm it benefits from the very same Docker Native APIs that Swarm can use, and can scale to a production level environment. You can also create and manage clusters all from within UCP. UCP also comes with LDAP/AD integration to easily set up teams and orgs within the GUI, the ability to set role-based access controls so you can secure your images, HA out of the box, TLS out of the box, and a native integration with DTR. This allows you to SSO into DTR and pull images from the registry into UCP. Docker Content Trust signs images, and enables IT Operations teams to set policies like “only pull signed images” into production. You can also use Docker Compose to deploy applications into UCP”.
- Docker Cloud. It’s a commercial cloud hosted service from Docker that makes it easy to manage and deploy anything from single-container applications to distributed micro-services stacks, on any cloud or on-premises infrastructure. You can think of Docker Cloud as a hosted “Docker Datacenter”.
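To make the Compose workflow concrete, here is a minimal, hypothetical docker-compose.yml for a two-container application; thanks to the default network, the web service can reach the database simply by the hostname `db`:

```yaml
version: '2'
services:
  web:
    build: .          # build the web image from the Dockerfile in this folder
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres   # pull the official postgres image
```

A single `docker-compose up` then builds the image, creates the default network, and starts both containers; run against a Swarm, the same file deploys across the cluster.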
Tools for local environments
- Docker Toolbox. Because Docker Engine uses Linux-specific kernel features, you can’t run Docker Engine natively on Microsoft Windows or Mac OS X as it is; you need a virtual machine running Linux on these OSes. The Docker Toolbox installs everything you need to get started with Docker on Mac OS X and Microsoft Windows. It includes the Docker Client, Docker Compose, Docker Machine, and Docker Kitematic (a GUI), and uses Oracle VirtualBox to deploy the Docker Engine on top of a Boot2Docker Linux distribution.
- Docker for Mac and Docker for Windows. These two new tools, still in beta, are faster alternatives to Docker Toolbox. There’s no need to use Oracle VirtualBox! On Mac, the Docker Engine runs in a xhyve virtual machine (VM) on top of an Alpine Linux distribution. On Windows, the Docker Engine runs in a Hyper-V VM, also on top of an Alpine Linux distribution.
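For example, once the Toolbox (and its bundled VirtualBox) is installed, a first session on a laptop might look like this; `default` is just an illustrative machine name:

```shell
# Create a Linux VM in VirtualBox with the Docker Engine installed in it
docker-machine create --driver virtualbox default

# Point the local Docker Client at the Engine running inside the VM
eval "$(docker-machine env default)"

# From here on, docker commands run against that Engine
docker run hello-world
```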
Beyond the borders
But wait! There’s more:
- Docker also leads some other open source projects around containers and software infrastructure plumbing, like runC, a tool for spawning and running containers according to the Open Container Initiative specification.
There’s a growing ecosystem of third party tools around:
- Orchestration, like Kubernetes or Mesos Marathon.
- Clustering, like Fleet or Nomad.
- Registries, like Quay.io or Artifactory.
- Managed container services, like AWS ECS, Azure Container Service or Google Container Engine.
- GUIs, like Panamax or Simple Docker UI.
Docker tools allow you to build, ship, and run any application, anywhere.
- Build: Docker Engine (docker build) and Docker Compose (docker-compose build, for multi-container applications)
- Ship: Docker Registry, Docker Trusted Registry behind your firewall, Docker Hub (SaaS)
- Run: Docker Engine (docker run), Docker Swarm (pool of Docker Engines), Docker Compose (docker-compose up)
- Manage: Docker Universal Control Plane (behind your firewall) and Docker Cloud (CaaS)
- Provisioning of Docker Engines: Docker Machine (on most popular public cloud providers or in your datacenter) or Docker Toolbox, Docker for Mac, Docker for Windows (for your laptop)
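Putting the summary together, and assuming a Dockerfile in the current directory plus a Docker Hub account (the name `myuser/myapp` is illustrative), a minimal build-ship-run round trip looks like this:

```shell
# Build: package the application and its dependencies into an image
docker build -t myuser/myapp:1.0 .

# Ship: push the image to a registry (Docker Hub, since none is specified)
docker push myuser/myapp:1.0

# Run: start a container from that image on any Docker Engine,
# be it a single host or a Swarm cluster
docker run -d -p 80:8000 myuser/myapp:1.0
```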
I hope this map turns out right and you find your way around Docker tools!