Multi-Host Linking, Service Discovery, & Load Balancing

Phil Dougherty
ContainerShip Articles
3 min read · Jun 2, 2015

Introduction

When running a clustered microservice infrastructure, it can be difficult for your applications to find and access the services they need across a large number of servers. It becomes especially difficult to route requests to the right place as you scale individual services up and down.

As you scale, containers can be placed on any number of follower hosts in your cluster, on a wide range of ports, so maintaining connectivity with all of those containers through traditional means no longer works:

  • Manually editing configuration files on your load balancer — Not fun or dynamic.
  • Using Docker links across multiple hosts — Nope.
  • Doing some kind of Chef search/node attribute wizardry — Painful and slow.

We’re going to show you how easy your life can be with ContainerShip. We let you scale any service up and down while automatically maintaining a list of upstream containers. Then you can reference applications and services by name to automate connectivity with your other applications.

Why Network Modes Matter

In ContainerShip, there are two network modes available, and they map directly to the two modes in Docker.

  • Host — Use the host (follower) system’s network stack and bind directly to the host system’s ports. This is the network equivalent of running directly on the host itself, which means less overhead and better performance. The downside is that you can’t, for example, run two nginx containers that both listen on port 80.
  • Bridge — Bridge mode assigns the container a random port on the host system’s network stack and routes traffic through a Linux bridge interface. The benefit is that your container could be exposed on port 12345 on the host while the software inside it listens on port 80. (The sketch after this list shows both modes with the plain Docker CLI.)
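
Outside of ContainerShip, the same two modes exist in plain Docker, which is the easiest place to see the difference. A minimal sketch, assuming a stock nginx image (the container ID and the randomly chosen port will differ on your hosts):

# Host mode: the container shares the host's network stack, so nginx
# binds straight to port 80 on the host. A second nginx container on
# the same host couldn't also claim port 80.
docker run -d --net=host nginx

# Bridge mode (Docker's default): -P publishes the container's exposed
# port on a random high port, routed through the docker0 bridge.
docker run -d -P nginx

# Ask Docker which host port was picked for container port 80,
# e.g. 0.0.0.0:32768.
docker port <container-id> 80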

By default, all applications launched on ContainerShip are configured to use bridge networking, which increases flexibility but can make managing ports complicated. Keeping track of which hosts and ports your containers are running on gets difficult in a hurry, and DNS SRV records or a TCP proxy alone won’t save you.

If required, host-based networking can be selected via the Create Application menu in Navigator, via the API, or via the CLI.

Linking Across Hosts & Automatic Load Balancing

Imagine you launched an Apache Cassandra cluster on ContainerShip called ‘mycassandra’. Later you decide to launch a NodeJS application that needs to utilize ‘mycassandra’ and does so by accepting two environment variables, CASSANDRA_HOST and CASSANDRA_PORT. When you’re only running on a single server you could use Docker links, but that approach breaks down once you’re running on multiple servers.

In ContainerShip, all you need to do is reference ‘mycassandra’ by setting the following two environment variables when launching your NodeJS app:

CASSANDRA_HOST=$CS_ADDRESS_MYCASSANDRA
CASSANDRA_PORT=$CS_DISCOVERY_PORT_MYCASSANDRA

From then on, all traffic will automatically be round-robin load balanced across all of your ‘mycassandra’ containers.
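
Inside the running container your application just sees ordinary values, so no ContainerShip-specific code is required. The address and port below are purely hypothetical, but inspecting the environment from a shell inside the NodeJS container would look roughly like this:

# Hypothetical resolved values inside the NodeJS container
$ env | grep ^CASSANDRA
CASSANDRA_HOST=192.168.1.20
CASSANDRA_PORT=11001

# The app connects to that single host:port pair, and requests are
# round-robin balanced across every 'mycassandra' container behind it.
$ nc -vz "$CASSANDRA_HOST" "$CASSANDRA_PORT"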

Here is an example of this being automatically populated for you when launching applications in Navigator.

Conclusion

ContainerShip makes it easy to scale your applications and access them from your other applications and services without a bunch of hassle.

In past jobs we went through the trouble of gluing together four or five different popular open source projects to try to accomplish these goals, with plenty of stress to show for it. So we decided to build the best solution available directly into ContainerShip. It really is easy!

Look out for more posts in the future that go into greater detail about the open source tools we have developed to power these types of systems!

