DockerCon 2016 — Day 2 Recap and Thoughts

The start of the second day was a stark contrast to day one. Today was all about commercial offerings: showing that enterprises are adopting containers and convincing the crowd that it is ideal to place legacy applications into containers, not just brand-new ones. To this end, Docker now carries the burden of proving not only that users run it in production but that enterprises are adopting it.


The statistics presented today were interesting, but there were far too many to list in a blog post. The one I felt was most impactful was that *60% of Docker users are using it in production*. This is amazing when you consider the age of the software (~3 years old) and the sheer number of users. I suspect that by next year this percentage will be north of 85%. Simply put, Docker is a game changer and it's seeing massive adoption. The base Docker engine has been production-ready for a while, but it's the supporting services, such as Swarm, that are playing much-needed catch-up.

Sacrifice Your Old

Docker CEO Ben Golub made it clear that Docker is not only for new applications but is also the ideal way to run your legacy applications. Docker is the best mechanism to package those applications and their dependencies and gain the ability to move them anywhere. I couldn't agree more. While you still won't be able to take advantage of everything the cloud or Docker has to offer, you can still use it as a mechanism to move those applications. Docker is not ideal for stateful applications, but we welcome legacy ones!

Docker Datacenter

Docker Datacenter is a collection of tools you can use to go from zero to full Docker in minutes. You'll get Swarm, Trusted Registry, Universal Control Plane (UCP), and more to make this possible. It really is a compelling story. I can use UCP to see any of my swarm clusters, containers, or services, scale them as needed, and inspect my Docker images along with any security vulnerabilities that may exist in them. I can even use UCP to deploy containers from a DAB file (announced yesterday). I assume most users are still going to use the CLI to interact with Docker, so UCP is mostly for insight and control.
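Because UCP fronts the standard Docker API, that insight-and-control loop can also be scripted. Here's a minimal sketch using the Python Docker SDK; the endpoint it connects to and the service name "web" are assumptions for illustration:

```python
# pip install docker
import docker

# Connects to whatever DOCKER_HOST points at; against UCP you would
# use its TLS-secured endpoint instead.
client = docker.from_env()

# Insight: list every swarm service and its replica count.
for service in client.services.list():
    spec = service.attrs["Spec"]
    replicas = spec.get("Mode", {}).get("Replicated", {}).get("Replicas", "?")
    print(f"{spec['Name']}: {replicas} replica(s)")

# Control: scale a hypothetical service named "web" to five replicas.
client.services.get("web").scale(5)
```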

As part of that control aspect, Docker has been constantly working on its security model, and it's coming along nicely. It can restrict users' ability to perform actions based on labels, allow only signed images in repositories, and enforce strong TLS between swarm nodes behind the scenes.
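Client connections get the same treatment. Here's a minimal sketch of mutual TLS with the Python Docker SDK, assuming hypothetical certificate paths and endpoint:

```python
import docker

# Mutual TLS: present a client certificate and verify the server
# against a CA. All paths and the URL below are placeholders.
tls_config = docker.tls.TLSConfig(
    client_cert=("/certs/cert.pem", "/certs/key.pem"),
    ca_cert="/certs/ca.pem",
)
client = docker.DockerClient(base_url="tcp://ucp.example.com:443", tls=tls_config)
print(client.version()["Version"])
```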

The holistic Docker vision is starting to take shape. Perhaps one day we can get Docker Datacenter to work with other orchestration providers.

The Docker Store

Docker announced a marketplace where users can purchase images and deploy them easily. This is instantly more interesting than AWS’s marketplace since these containers can run anywhere you have a swarm.

The Microsoft Demo

Microsoft grabbed some time during the keynote to show off not only its Docker integration but also its growth with Linux. I can't accurately describe the whole demo, as it was quite complicated, but here is the gist:

There was a server on stage running a local version of Azure, called Azure Stack. It has support for all the Azure APIs and UI, plus a VPN to the public Azure. From this server the presenter was able to launch a Docker Datacenter from a template, which stood up a swarm of Linux VMs. He was then able to deploy an application into the swarm, debug it via Visual Studio Code with direct Docker Compose support, update it, and redeploy it. The application relied on an MSSQL database, but that database was running in an Ubuntu container on a Linux virtual machine. Whoa.
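For a taste of that last part, here's a minimal sketch of launching SQL Server in a Linux container via the Python Docker SDK; the image name and SA password are assumptions (Microsoft's image also requires accepting its EULA):

```python
import docker

client = docker.from_env()

# SQL Server in a Linux container. The image name and password are
# illustrative; ACCEPT_EULA and a strong SA password are required.
mssql = client.containers.run(
    "microsoft/mssql-server-linux",
    environment={"ACCEPT_EULA": "Y", "SA_PASSWORD": "Str0ng!Passw0rd"},
    ports={"1433/tcp": 1433},
    detach=True,
)
print(mssql.name, mssql.status)
```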

This demo was really about showing how far Microsoft has evolved in embracing both Linux and Docker.

Cool Hacks

And finally, Docker closed out the day with a Cool Hacks general session. These hacks were presented by members of the community and could easily have been talks on their own.

The one hack I really enjoyed showed how to use Docker to create a serverless infrastructure. Serverless is a movement that has been gaining traction since AWS released Lambda, whereby you store functions that can be called remotely or run in response to some event. It allows for the complete removal of servers (except perhaps databases) from your infrastructure. It's been said that serverless will make Docker irrelevant, but the presenter showed how you could use Docker to build your own serverless applications today. He did this by making Docker images that each ran only one function. He then called these functions from an application via a Docker client library, which in turn ran the container with the given arguments and returned the result to the original caller.
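The presenter's exact code wasn't shared, but the shape of the idea is easy to sketch with the Python Docker SDK: treat each image as a single function, run a throwaway container per call, and hand its stdout back as the return value. The image used below is just a stand-in:

```python
import docker

client = docker.from_env()

def invoke(function_image: str, *args: str) -> str:
    """Run a one-function container and return its stdout as the result."""
    output = client.containers.run(
        function_image,
        command=list(args),
        remove=True,  # throwaway container: removed once it exits
    )
    return output.decode().strip()

# Stand-in "function": any image whose entrypoint computes something
# and prints the answer would work the same way.
print(invoke("alpine", "echo", "hello from a container function"))
```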

While there are a number of issues with this model, it's a really cool idea. It will be interesting to see if it takes off in the Docker community.

That's it for our DockerCon 2016 coverage. I'd love to hear your feedback, questions, and thoughts in the comments below.

– Ryan Richard, Senior Engineer • @rackninja