Docker Swarm on Google Cloud Platform

Sandeep Dinesh
Google Cloud - Community
7 min read · Jun 23, 2016

There are some interesting things going on with the new Docker 1.12 release. Docker is bundling Swarm into Docker itself, as well as upgrading Swarm with more mature container orchestration abilities.

Swarm now joins Kubernetes, Mesos, and Nomad as a fully-fledged orchestration engine. With these new orchestration abilities, I wanted to take another look at Swarm. I also like the name “Swarm” ;)

Swarm does not yet have an out-of-the-box option on Google Cloud. Hopefully this is added soon, but until then let’s look at how you can manually set up a Swarm cluster on Google Cloud Platform.

All in all, you should have a fully functional Swarm cluster in about 15 minutes!

In fact, Google Compute Engine is so fast at provisioning VMs that if you run the node creation and setup in parallel you can create a cluster in around five minutes! Crazy fast!

Use this script for a fully automated install!
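If you would rather script it yourself, the parallel provisioning might look something like this. This is only a sketch: the project ID and node names are placeholders, and the flags mirror the docker-machine command used later in this article. If docker-machine is not on your PATH, the commands are printed instead of run so you can inspect them first.

```shell
#!/bin/sh
# Sketch: provision the manager and workers in parallel with docker-machine.
# PROJECT is a placeholder -- replace it with your own project ID.
PROJECT="${PROJECT:-my-gcp-project}"
# If docker-machine is not installed, print the commands instead of running them.
command -v docker-machine >/dev/null 2>&1 && RUN="" || RUN="echo"

create_node() {
  $RUN docker-machine create "$1" \
    --engine-install-url experimental.docker.com \
    -d google \
    --google-machine-type n1-standard-1 \
    --google-zone us-central1-f \
    --google-disk-size "500" \
    --google-tags swarm-cluster \
    --google-project "$PROJECT"
}

# Launch all three creations at once, then wait for every one to finish.
create_node swarm-manager &
create_node swarm-worker-1 &
create_node swarm-worker-2 &
wait
```

Because the three `docker-machine create` calls run as background jobs, the total wall-clock time is roughly the time of the slowest single node, which is how the five-minute cluster is possible.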

Note: Swarm 1.12 is still very young. Documentation is sparse and still evolving. I expect things to get better very soon!

Prerequisites

You need Docker 1.12 to get the upgraded Swarm features. Here is what I have:

$ docker -v
Docker version 1.12.0-rc2, build 906eacd, experimental
$ docker-machine -v
docker-machine version 0.8.0-rc1, build fffa6c9

You also need the Google Cloud SDK installed, as well as a Google Cloud project. Once you do that, make sure you log in:

$ gcloud init

Creating the Swarm

The first step is to create the Swarm nodes.

Ideally you would use a Managed Instance Group, but I’ll save that for another tutorial.

Let us create a manager node and a worker node.

Manager Node Setup

Use docker-machine to create the manager node:

$ docker-machine create swarm-manager \
--engine-install-url experimental.docker.com \
-d google \
--google-machine-type n1-standard-1 \
--google-zone us-central1-f \
--google-disk-size "500" \
--google-tags swarm-cluster \
--google-project <YOUR_PROJECT_ID>

Replace <YOUR_PROJECT_ID> with your project ID. Also feel free to change the zone, machine type, and disk size. The important thing to do is tag this instance with the “swarm-cluster” tag, which will let us open firewall ports later on.

In about five minutes the manager will be created.

Now we need to set up this machine as a Swarm manager:

$ eval $(docker-machine env swarm-manager)
$ docker swarm init

Your manager is now created!

Worker Node Setup

Create a worker node the same way as the manager:

$ docker-machine create swarm-worker-1 \
--engine-install-url experimental.docker.com \
-d google \
--google-machine-type n1-standard-1 \
--google-zone us-central1-f \
--google-disk-size "500" \
--google-tags swarm-cluster \
--google-project <YOUR_PROJECT_ID>

Now we need to get the IP address of the manager so we can join the Swarm.

$ gcloud compute instances list
NAME            ZONE           MACHINE_TYPE   INTERNAL_IP  EXTERNAL_IP
swarm-manager   us-central1-f  n1-standard-1  10.240.0.0   130.x.x.x
swarm-worker-1  us-central1-f  n1-standard-1  10.240.0.1   104.x.x.x

Use the internal IP for the Swarm Manager to connect your worker to the Swarm. The default networking setting opens all ports on the internal subnet so you don’t have to mess with firewall rules.

$ eval $(docker-machine env swarm-worker-1)
$ docker swarm join <SWARM_MANAGER_INTERNAL_IP>:2377

Repeat these steps to add more workers to the Swarm. (Note: this tokenless join is the 1.12 RC behavior; newer Docker builds require a join token here, and `docker swarm init` prints the exact `docker swarm join --token …` command to run on each worker.)
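With several workers, the repeated join step can be wrapped in a small loop. A sketch, assuming the RC2-style tokenless join from above; MANAGER_IP and the worker names are placeholders, and setting DRY_RUN (or not having docker-machine installed) prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch: join a list of workers to the Swarm in a loop.
# MANAGER_IP and the worker names are placeholders for your own values.
MANAGER_IP="${MANAGER_IP:-10.240.0.0}"
# Fall back to a dry run if docker-machine is not installed.
command -v docker-machine >/dev/null 2>&1 || DRY_RUN=1

join_worker() {
  if [ -n "$DRY_RUN" ]; then
    # Print what would run instead of running it.
    echo "docker-machine env $1 && docker swarm join $MANAGER_IP:2377"
    return 0
  fi
  # Point the local Docker client at the worker, then join the manager.
  eval "$(docker-machine env "$1")"
  docker swarm join "$MANAGER_IP:2377"
}

for worker in swarm-worker-1 swarm-worker-2 swarm-worker-3; do
  join_worker "$worker"
done
```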

Log back into the Swarm Manager to start executing commands.

$ eval $(docker-machine env swarm-manager)

For example, you can see all the nodes in the Swarm:

$ docker node ls
ID     NAME            MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
xxx *  swarm-manager   Accepted    Ready   Active        Leader
yyy    swarm-worker-1  Accepted    Ready   Active

You are done with the cluster setup!

Creating a Service

Creating a service is straightforward. It’s basically the same commands you would use in normal Docker.

For example, to start a single nginx server on port 80, run:

$ docker service create --replicas 1 -p 80:80/tcp --name nginx nginx

And we can see the service running:

$ docker service ls
ID            NAME   REPLICAS  IMAGE  COMMAND
2umwwwc6tu9d  nginx  1/1       nginx

Swarm will make sure that the replica is always running. We can also scale up and down the number of replicas with a single command:

$ docker service scale nginx=10

Pretty cool stuff! There are more complicated things you can do, but documentation is sparse. The launch blog post has the best information so far.

Exposing a Service

Now that you have nginx running in your Swarm, you have to open it up to the outside world. By default, Swarm exposes the service on the specified port on every node in the Swarm. This is very similar to creating a NodePort Service in Kubernetes. We need to expose this port!

Side Note: I really hope Docker adds in native support for Google Cloud Platform so that these things are automatic, similar to how they function in Kubernetes.

Option 1 — Expose a single node:

The easiest thing to do is open the port on a node, and use the node IP address for your website or service.

I would use this option for small clusters that have a single manager and a few workers.

Open the port on the Swarm instances:

$ gcloud compute firewall-rules create nginx-swarm \
--allow tcp:80 \
--description "nginx swarm service" \
--target-tags swarm-cluster

Now get the external IP addresses of the nodes:

$ gcloud compute instances list
NAME            ZONE           MACHINE_TYPE   INTERNAL_IP  EXTERNAL_IP
swarm-manager   us-central1-f  n1-standard-1  10.240.0.0   130.x.x.x
swarm-worker-1  us-central1-f  n1-standard-1  10.240.0.1   104.x.x.x

Use the external IP of one of the nodes to reach your service. I recommend using the manager’s IP address.

Option 2 — Round-Robin DNS:

Using the same steps as option one, you can use all the external IP addresses with Round Robin DNS. This basically gives you a form of load balancing for free! The only problem is that if you add or remove nodes in your cluster, you need to update your DNS settings every time. DNS clients also cache heavily, so if you scale down, it is possible your users will hit nodes that no longer exist.

If you have multiple managers, I would use this method to provide a simple form of load balancing and fault tolerance.
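If your domain is hosted in Google Cloud DNS, the round-robin record is simply one A record containing every node's external IP. A sketch, assuming a managed zone named `my-zone` and the hostname `swarm.example.com` (both placeholders, along with the documentation-range IPs standing in for your nodes' real external addresses). If `gcloud` is not installed, the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: publish one round-robin A record with all node IPs via Cloud DNS.
# The zone, hostname, and IPs below are placeholders for your own values.
GCLOUD="gcloud"
# Fall back to printing the commands if gcloud is not installed.
command -v gcloud >/dev/null 2>&1 || GCLOUD="echo gcloud"

add_round_robin() {
  zone="$1"; name="$2"; shift 2
  $GCLOUD dns record-sets transaction start --zone "$zone"
  # One A record with multiple IPs: resolvers rotate between them.
  $GCLOUD dns record-sets transaction add --zone "$zone" \
    --name "$name" --ttl 300 --type A "$@"
  $GCLOUD dns record-sets transaction execute --zone "$zone"
}

add_round_robin my-zone swarm.example.com. 203.0.113.1 203.0.113.2
```

A short TTL (300 seconds here) limits, but does not eliminate, the stale-cache problem described above when you scale down.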

Option 3 — Google Cloud Load Balancer:

This is the most robust, but also the most complicated, method for exposing your service. When you create a Network Load Balancer, you get a single IP address, but traffic is sent to all the nodes in the Swarm. Additionally, you can set up a health check so if a node goes down, traffic is not sent to it.

If you want the best reliability, or have a larger cluster that may be spread over multiple zones for high availability, I recommend this option.

I will set up a TCP Load Balancer. You can also set up a more powerful HTTPS Load Balancer if that is appropriate for your service.

While you can do this on the command line, I find the UI to be more intuitive.

1. Open the Load Balancer page.

2. Click “Create Load Balancer.”

3. Click “Start configuration” for the TCP Load Balancer.

4. Give your Load Balancer a name.

5. Click “Backend configuration.” Select the region your Swarm is in, then click “Select existing instances” and add all the Swarm nodes.

6. Create a health check. Give it a name and configure the numbers how you see fit; I used the defaults. We are going to ping port 80 (where the nginx service lives) every 5 seconds to make sure each node is healthy. Save and continue.

7. Go to the Frontend configuration and specify the port for your Swarm service.

8. Finally, click “Create,” and the Load Balancer will spin up in a few minutes.

You can get the IP address for your Load Balancer with this command:

$ gcloud compute forwarding-rules list
NAME         REGION       IP_ADDRESS       IP_PROTOCOL  TARGET
nginx-swarm  us-central1  104.xxx.xxx.xxx  TCP          xxx
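For reference, the same TCP load balancer can be sketched on the command line as well. Treat this as an approximation: flag names have drifted between gcloud releases (check `gcloud compute target-pools --help` for yours), and the pool, rule, and health-check names are placeholders I chose. Without gcloud installed, the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: a TCP network load balancer in front of the Swarm nodes.
# All names, the region, and the zone are placeholders for your own setup.
GCLOUD="gcloud"
# Fall back to printing the commands if gcloud is not installed.
command -v gcloud >/dev/null 2>&1 || GCLOUD="echo gcloud"

create_lb() {
  # Health check on port 80, where the nginx service listens.
  $GCLOUD compute http-health-checks create nginx-health --port 80

  # A target pool over the Swarm nodes, tied to the health check.
  $GCLOUD compute target-pools create swarm-pool \
    --region us-central1 --http-health-check nginx-health
  $GCLOUD compute target-pools add-instances swarm-pool \
    --instances swarm-manager,swarm-worker-1 --instances-zone us-central1-f

  # A forwarding rule gives the pool a single external IP on port 80.
  $GCLOUD compute forwarding-rules create nginx-swarm-lb \
    --region us-central1 --port-range 80 --target-pool swarm-pool
}

create_lb
```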

Conclusion

With the new 1.12 release of Docker, Swarm was very easy to set up and use. Once it is officially released, it will be even easier. Great job by the Docker team!

I hope Docker adds more documentation and examples, and I really hope they add support for native Google Cloud features so people don’t need to mess with Firewalls and Load Balancers!

I also plan on doing a comparison between Swarm and Kubernetes. I found there to be a lot of differences and similarities. Stay tuned for that!
