What is Docker? A comparison with virtualization
Docker is an open source container platform that enables developers to build, ship and run applications across different environments. The platform aims to minimize the number of components required to run distributed applications and to reduce the friction between development, testing and deployment.
This platform differs from virtualized environments in that a container holds only the application and the binaries or libraries it requires; it shares the host operating system's kernel rather than running a complete guest operating system on top of a hypervisor, which adds overhead. The main advantage of containers, and of Docker, is precisely that they do not need a full operating system to execute an application. Docker provides a standard, well-documented and reliable way to manage the underlying containers, and for this reason it has gained popularity since its creation in 2013.
Docker is composed of a daemon that sits on the server machine and accepts commands from the Docker client. The daemon communicates with the underlying operating system through libcontainer, a library that runs the low-level commands needed to manage containers. The user communicates with the daemon through a Docker client, which can be a command line tool or a graphical interface. Finally, the registry is a cloud service provided by the Docker platform to host applications and their libraries packaged as images. An image can be run as a container through the Docker daemon, and these packaged images can be created by any user and shared through the registry.
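The division of labor described above can be sketched in code: the client's job is only to assemble commands, while the daemon does the actual work of fetching images from the registry and running them as containers. The sketch below is illustrative, not part of Docker itself; the helper names and the image name "nginx:latest" are assumptions chosen for the example.

```python
# A minimal sketch of the Docker client's role: building the CLI
# invocations that are handed to the daemon. The helper functions and
# the "nginx:latest" image name are illustrative assumptions.

def pull_command(image: str) -> list[str]:
    """Build the invocation that asks the daemon to fetch an image
    from the registry."""
    return ["docker", "pull", image]

def run_command(image: str, name: str) -> list[str]:
    """Build the invocation that asks the daemon to start a container
    from a locally available image, detached, with a given name."""
    return ["docker", "run", "--detach", "--name", name, image]

print(pull_command("nginx:latest"))
print(run_command("nginx:latest", "web"))
```

In a real setup these argument lists would be passed to the Docker CLI (for example via a process launcher), and the daemon, not the client, would contact the registry and create the container.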
Docker (and containers in general) is considered a disruptive technology relative to traditional virtualization, as it delivers considerable gains in efficiency and resource utilization by removing the need for a guest operating system running on top of a hypervisor. Given that the project is open source, major players in the industry such as Google, Red Hat, Rackspace and Canonical are adopting it.
The startup behind Docker launched the project in March 2013 as an open platform, and it quickly gained popularity in the startup and developer community, with over a hundred meetups (developer gatherings) around the world and with the support of many large companies using it, such as Gilt, Yelp, Google, Microsoft and Rackspace, to name a few.
While Docker has the potential to change the IaaS space, it needs to evolve beyond a packaging framework, since developers are still executing the same code on similar infrastructure, namely virtualized environments such as AWS. One of the strongest signs that this is about to shift is the release of bare-metal servers: physical server nodes that can be provisioned in the same manner as virtualized environments. However, the cost of each bare-metal server is considerably higher than that of a virtual machine.
Early benchmarks comparing Docker containers to virtual machines have shown that Docker provides better startup and shutdown times. For example, in a test run by developers at the startup Flux7, Docker containers would start and stop in less than 50 milliseconds, while virtual machines would take between 30 and 45 seconds to start and up to 10 seconds to stop. In other tests, the memory, storage and CPU benchmarks of Docker were similar to those of KVM and bare-metal hardware. The main difference was in network latency, where Docker's routing mechanisms introduced delays during the performance tests. In another test on containers run by IBM, measuring random I/O throughput (IOPS), virtual machines performed considerably worse than Docker and native execution, which delivered the most benefit.
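The startup-time gap reported above can be made concrete with a back-of-the-envelope calculation: taking roughly 50 milliseconds per container start and 30 to 45 seconds per VM boot, as reported, one can ask how many containers could start back to back in the time a single VM takes to boot. This is a sketch based on those reported figures, not a new benchmark.

```python
# Back-of-the-envelope comparison using the figures reported above:
# ~50 ms per container start, 30-45 s per VM boot (reported ranges,
# not independently measured here).

container_start_ms = 50                  # ~50 ms per container
vm_start_ms = (30_000, 45_000)           # reported VM boot range: 30-45 s

# How many containers could start, one after another, while one VM boots?
low = vm_start_ms[0] // container_start_ms
high = vm_start_ms[1] // container_start_ms
print(f"roughly {low} to {high} containers could start while one VM boots")
```

By this rough arithmetic, several hundred containers could be launched sequentially in the time a single virtual machine takes to become available, which is the practical meaning of the startup-time results.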
Docker is considerably more efficient than virtualization technology because it does not require an operating system in each virtual machine. Docker containers are also smaller, so more of them can fit on the same hardware. While network latency can suffer from the routing of network packets to the corresponding container, this limitation will likely be overcome as Docker matures, in much the same way that a disruptive technology improves until it overtakes the incumbent.