The easiest way to deploy Docker with service discovery

VisualOps
3 min read · Sep 10, 2014

Service Discovery is a hot topic in the Docker world. Simply put:

In SOA/distributed systems, services need to find each other; e.g. a web service might need to find a caching service. DNS can be used for this, but it is nowhere near flexible enough for services that are constantly changing. A Service Discovery system provides a mechanism for:

- Services to register their availability
- Locating a single instance of a particular service
- Notifying when the instances of a service change

A number of projects have been looking into how one might solve the problem. Despite various technical differences and design philosophies, most of these efforts are built around the idea of Announce/Lookup, which requires additional components to register, store and discover services. In some deployments this method is perfect, but it comes with a steeper learning curve and introduces far more moving parts into the cluster. Arguably, until you are running a large-scale cluster, the frequency of configuration change is not that high, which makes the Announce/Lookup style less attractive.
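To make the comparison concrete, here is a rough sketch of the Announce/Lookup style using etcd as the registry (etcd is only one example; the key layout below is an illustrative assumption):

# Announce: a caching instance registers itself under a well-known key
$ etcdctl set /services/cache/10.0.0.7 "10.0.0.7:11211"

# Lookup: a consumer queries the registry to find an instance to talk to
$ etcdctl ls /services/cache
$ etcdctl get /services/cache/10.0.0.7

In practice each instance also needs a sidekick process to keep its registration fresh, and every consumer needs client code to query the registry, which is exactly the kind of extra moving parts mentioned above.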

The VisualOps way

At VisualOps, we always try to keep things simple, so we came up with a way to resolve the Docker service discovery issue in a simple and non-intrusive manner. Let me explain:

With VisualOps, you use State to define the instance configuration and Link to set up cross-instance connections. Instead of letting instances announce and discover each other via a central database, VisualOps takes the explicit route and specifies the logical relationship between services. VisualOps then renders the Link upon provisioning:

mysql://root@@{db.PrivateIpAddress}:3306 -> mysql://root@10.0.0.4:3306
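The same substitution applies when the endpoint lives in an application configuration file on the host. For example (the file path and variable name below are hypothetical; only the @{...} placeholder syntax comes from the example above):

# /srv/myapp/app.conf as defined in VisualOps, before provisioning
DATABASE_URL=mysql://root@@{db.PrivateIpAddress}:3306/myapp

# the same file after VisualOps renders the Link
DATABASE_URL=mysql://root@10.0.0.4:3306/myapp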

This technique works great in a VM-based environment, and now we want to do the same thing in Docker by leveraging the volume-mount feature to mount the application configuration file from the host instance into the container:

$ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
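Applied to our case, the host-side file that gets mounted is the config VisualOps has already rendered. A quick way to convince yourself the container sees the rendered values (the paths are the hypothetical ones from the snippet above; :ro simply keeps the container from modifying the host copy):

$ sudo docker run --rm -it \
    -v /srv/myapp/app.conf:/etc/myapp/app.conf:ro \
    ubuntu cat /etc/myapp/app.conf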

Putting the two together gives us the whole solution.

Technically speaking, to deploy a multi-instance Docker application, all you need to do is specify three things (a rough docker run equivalent follows the list):

- which Docker image to run, e.g. my/node
- the container setup: port, CPU, memory
- the app configuration file content and its links with other containers
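For reference, those three things map onto a plain docker run roughly like this (the port, CPU/memory values and config paths are illustrative assumptions; the config file is the one VisualOps renders on the host):

# 1. the image to run:       my/node
# 2. the container setup:    port 3000, CPU shares, memory limit
# 3. the app config + links: the file rendered by VisualOps on the host
$ sudo docker run -d \
    -p 3000:3000 \
    --cpu-shares 512 -m 256m \
    -v /srv/myapp/app.conf:/etc/myapp/app.conf \
    my/node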

In VisualOps it looks like this:

Automation & Orchestration

Note that this is not a static configuration of your containers. CI/CD is handled automatically: once a newer version of the Docker image is committed, it is pulled and the corresponding containers are recreated. VisualOps also keeps a watchful eye over the cluster: if autoscaling or failover is triggered, it re-renders the file content to make sure the containers always get the correct connection. You don't need to change any code or access a separate etcd cluster.

Not so sexy, but dead simple

Docker is a great tool and we want to contribute to it. Quite frankly, it's a booming ecosystem, and we envision more and more tools emerging for different use cases and requirements. Eventually, it will come down to a simple choice of flavor. I know the service discovery solution we've built with VisualOps is not particularly sexy, but it is dead simple to understand and learn, and it works like an absolute charm.

Try it.
