This article describes a cross-platform hybrid cloud built atop Docker. With our modifications to the Docker tool set, here’s what we achieved:
- We mixed 50 of our Linux/ARM nodes and 50 public Linux/x86–64 DigitalOcean nodes into a single virtual Docker engine using a modified Docker Swarm.
- We ran a bare-metal Linux/ARM (not Linux/x86–64) cluster, provisioned with Docker Machine through our own Machine driver.
- Regardless of the underlying hardware architecture (a mix of ARM and x86–64), we used Docker Compose to scale Nginx to 100 containers over this 100-node cluster.
Hybrid Cloud and Docker
With the Docker tool set, setting up a hybrid cloud has already become easy. Docker Machine helps us create and manage container-centric machines. It comes with a large set of drivers, ranging from local VirtualBox to IaaS providers such as Amazon EC2 and DigitalOcean. Docker Machine simply makes them Docker-ready.
Docker Swarm helps us form a cluster from the Docker engines provisioned by Docker Machine. It doesn’t matter where those engines run: as long as their IP addresses are reachable, Swarm can join them into a single virtual Docker host and let any Docker client command the whole cluster. With that, we already have a Docker-based hybrid cloud.
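As a sketch, the basic workflow with the stock tools looks like this (flags follow the docker-machine CLI of the time; the names and the discovery token are illustrative, and we only echo the commands here since actually running them needs cloud credentials and live machines):

```shell
# Sketch of the basic Machine + Swarm workflow. We wrap each call in
# `run`, which just echoes it; swap the echo out to really execute.
run() { echo "$@"; }

TOKEN="<cluster-token>"   # normally obtained from `docker run swarm create`

# A Swarm master on a local VM, plus one cloud worker joining the same cluster:
run machine create -d virtualbox --swarm --swarm-master \
    --swarm-discovery "token://$TOKEN" swarm-master
run machine create -d digitalocean --swarm \
    --swarm-discovery "token://$TOKEN" ocean-1
```

Once both machines are up, pointing a Docker client at the master addresses the whole cluster as one engine.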
Despite running almost anywhere, Docker currently has a limitation: it officially supports only the Linux/x86–64 (aka AMD64) architecture. This is understandable, as Linux/x86–64 is the largest and most prominent platform. As mentioned above, we can run a Linux/x86–64 hybrid cloud with Docker, but what if we’d like to bring more than one hardware architecture together behind a single endpoint?
ARM and Low-Power Clouds
Fact: ARM architectures have been moving into the server-class market.
ARM-based servers and clouds have been gradually emerging. AppliedMicro delivers X-Gene, the world’s first ARMv8 64-bit system-on-chip for servers. HP Moonshot, based on the X-Gene SoC, has already brought this kind of server to data centers. A Manchester-based firm, DataCentred, built an OpenStack cloud platform atop Moonshot servers. Beyond the 64-bit class, Scaleway, a French startup, has offered 32-bit ARM-based IaaS since last year.
In April last year, we wrote about an Aiyara cluster, a Spark/Hadoop cluster made with ARM boards. Its technical description was kindly published in the DZone’s Big Data Guide.
Since then, we have encountered a new problem.
Although we’ve been using Ansible to manage our Aiyara cluster with good success, managing software applications for a cluster is hard.
We concluded that we need a virtualization layer, even at the small scale. However, the Hypervisor approach is not an option for us because we use ARM processors for the cluster.
Is there a better way?
Fortunately, while we were working on Aiyara, Docker was on the rise. We took a look at it and put in the effort to get it running on our ARM-based cluster. At the very least, it was up and running. The next question: how should we manage Docker in clustering mode?
In December last year, Docker announced a new set of tools to support its ecosystem, namely Swarm, Machine and Compose. We planned to adopt Swarm to manage cluster-wide virtualization, so that became our starting point for contributing to Swarm. Later, in February, we found that Docker Machine would be really useful for provisioning the nodes of our cluster. But Machine had no driver for provisioning bare-metal hardware yet, so we decided to implement one: the Machine Aiyara driver.
The result is fantastic. It enables us to control both our cluster and public clouds at the same time, using a simple workflow similar to the one suggested in Docker Orchestration. Here’s what it looks like:
$ machine ls
NAME ACTIVE DRIVER STATE URL SWARM
rack-1-node-11       aiyara   tcp://192.168.1.11:2376
rack-1-node-12       aiyara   tcp://192.168.1.12:2376
rack-1-node-13   *   aiyara   tcp://192.168.1.13:2376
rack-1-node-14       aiyara   tcp://192.168.1.14:2376
rack-1-node-15       aiyara   tcp://192.168.1.15:2376
The last remaining, and biggest, problem was that Docker images are only available in platform-specific binary formats. If you build an image for the Linux/x86–64 (amd64) architecture, it will not natively run on a Linux/ARM machine. We could emulate it via qemu, but that is not a good enough option.
We were lucky enough to come up with a good solution: our versions of Swarm and Machine work together to make the cluster run both x86–64 and ARM images transparently.
Starting small, a 2-node Hybrid Cloud
Here’s our smallest hybrid cloud. One node runs on an ARM board, and another is a 512 MB DigitalOcean machine. An extra local node acts as the Swarm master.
$ machine ls
NAME ACTIVE DRIVER STATE URL SWARM
master * none tcp://127.0.0.1:3376
ocean-1 digitalocean Running
rack-1-node-4 aiyara tcp://192.168.1.4:2376
Docker Machine makes orchestration easy. We can just use machine config to provide all the configuration the Docker client needs. See Docker Orchestration for more information.
$ docker $(machine config master --swarm) info
Filters: affinity, health, constraint, port, dependency
Nodes: 2
 ocean-1
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-43-generic, operatingsystem=Ubuntu 14.04.1 LTS, provider=digitalocean, storagedriver=aufs
 rack-1-node-4
  └ Containers: 1
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.069 GiB
  └ Labels: architecture=arm, executiondriver=native-0.2, kernelversion=3.19.4, operatingsystem=Debian GNU/Linux 7 (wheezy), provider=aiyara, storagedriver=aufs
The above docker info command ran through the Swarm master. It shows that we have a cluster of 2 nodes: the first was created with Docker Machine’s DigitalOcean driver, and the second is our Aiyara node. Both use AUFS as their storage driver.
Next, we pulled the Debian image to each node before running containers.
$ docker $(machine config master --swarm) pull debian
rack-1-node-4: Pulling debian:latest...
ocean-1: Pulling debian:latest...
ocean-1: Pulling debian:latest... : downloaded
rack-1-node-4: Pulling debian:latest... : downloaded
Well, ocean-1 clearly pulls the image faster ☺
Next, we test running a simple command, uname -a, twice through Swarm, using the debian image. Swarm will choose one node for the first run and another for the second, because its default scheduling strategy is spread, an algorithm that places containers across nodes as evenly as possible.
$ docker $(machine config master --swarm) run debian uname -a
Linux e75d5877493e 3.19.4 #2 SMP Mon Apr 20 02:39:39 ICT 2015 armv7l GNU/Linux
$ docker $(machine config master --swarm) run debian uname -a
Linux 6d6b9d406f88 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 GNU/Linux
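Conceptually, the spread strategy just picks the node currently running the fewest containers. A toy sketch of that idea (not Swarm’s actual code):

```shell
# Toy model of Swarm's "spread" scheduling strategy: among candidate
# nodes, pick the one currently running the fewest containers.
# Input lines: "<node-name> <container-count>".
pick_node() {
  sort -k2 -n | head -n 1 | awk '{print $1}'
}

printf 'ocean-1 1\nrack-1-node-4 0\n' | pick_node
# → rack-1-node-4
```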
Let’s see what’s behind this. We ran ps -a to show all containers in the cluster. Interestingly, our ARM node actually ran a container from a different image than the one used by the DigitalOcean node: aiyara/debian:latest.arm is our Debian ARM image. The Aiyara version of Swarm is clever enough to know that a new container is being placed on an ARM machine, so it chose the correct platform-specific image for us.
$ docker $(machine config master --swarm) ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d6b9d406f88 debian:latest "uname -a" 14 minutes ago Exited (0) 13 minutes ago ocean-1/focused_leakey
e75d5877493e aiyara/debian:latest.arm "uname -a" 14 minutes ago Exited (0) 14 minutes ago rack-1-node-4/modest_ritchie
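The substitution above can be pictured as a tiny lookup. A toy sketch, not our actual Swarm patch (the aiyara/*:latest.arm naming follows our own images):

```shell
# Toy version of platform-aware image selection: when a container is
# scheduled onto an ARM node, rewrite the image name to the ARM build.
select_image() {
  image="$1"; arch="$2"
  if [ "$arch" = "arm" ]; then
    echo "aiyara/${image%%:*}:latest.arm"
  else
    echo "$image"
  fi
}

select_image debian:latest arm     # → aiyara/debian:latest.arm
select_image debian:latest amd64   # → debian:latest
```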
A 100-node Cluster
Well, a 2-node cluster is too simple. Here’s a 100-node cloud in action.
As we already had 50 ARM nodes in our cluster, we then created another 50 nodes on DigitalOcean. Here’s the command for creating a DigitalOcean node.
$ machine create \
    -d digitalocean \
    --digitalocean-access-token=$DO_TOKEN \
    ocean-1
We ran the above command for nodes ocean-1 through ocean-50, using the default 512 MB droplet size from the $5/mo plan.
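Creating the 50 droplets one by one is tedious, so a small loop can generate the calls. A sketch; we echo the commands here, since really running them needs a DigitalOcean access token:

```shell
# Generate the `machine create` invocations for ocean-1 .. ocean-50.
# Swap `echo` for `eval` (and add your access-token flag) to run them.
for i in $(seq 1 50); do
  echo "machine create -d digitalocean ocean-$i"
done
```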
Scaling the number of containers manually would be painful, but Docker Compose handles it easily. We created a directory named aiyara_cloud and placed docker-compose.yml, the description file for our deployment unit, there. We will start an Nginx web server on each node, binding the host’s port 80 to the container’s exposed port. Here’s our YML description:
$ cat docker-compose.yml
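A minimal sketch of such a Compose file (v1 syntax; the image name here is illustrative, since our modified Swarm maps it to the matching platform-specific build on each node):

```yaml
web:
  image: nginx
  ports:
    - "80:80"
```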
We start the first container with the following command:
$ docker-compose up -d
To scale the web service to 100 containers, we use:
$ docker-compose scale web=100
Then the other 99 containers are created, each from its platform-specific image, and placed correctly on their hardware, one by one. Here’s the list of all running containers over a hybrid ARM/x86–64 cluster.
We thank DigitalOcean for letting us run 50 Droplets at a time, so that we could create a large 100-node hybrid cloud mixing the Aiyara cluster and DigitalOcean together.
Today, we have successfully built the first known cross-platform hybrid cloud based on Docker, with only small modifications to Machine and Swarm. These changes allow us to use Docker transparently over heterogeneous hardware.
We envision that this kind of hybrid cloud is important. This architecture will help us:
- balance performance and power consumption of the cloud
- gradually migrate images to your preferred platforms
- mix and match your hybrid cloud to utilize the available resources
Microsoft is going to support Docker in the next version of Windows Server, so Windows/AMD64 Docker images will become available after that. Linux/AArch64 is also already coming to the market. Fortunately, our cross-platform hybrid cloud architecture is ready for them today. If you’re already using Docker, this cross-platform architecture will minimize changes to your DevOps workflow.
Originally published at dzone.com.