Traditional Clustering in the Cloud, part 1: introduction

Nate Aiman-Smith
RunAsCloud
3 min read · Sep 15, 2017
Clustering diagram using common components. (source)

Creating highly available (HA) applications in the cloud is generally pretty simple: ensure that no single component’s failure can bring down your application. The general wisdom for AWS is to be redundant at the level of the Availability Zone (AZ). Or, to put it another way:

Your application should be able to withstand the loss of an Availability Zone without suffering a major impact.

If you’re building a new application from scratch, this is usually pretty easy to do: make sure that every critical component is distributed across multiple AZs, or is based on a managed service that’s inherently redundant across AZs (e.g., S3). Some components (such as web or application servers) can be distributed across AZs using an Elastic Load Balancer. Other components (such as databases) have managed offerings like Multi-AZ RDS that make HA relatively straightforward.
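To make that concrete, here’s a minimal boto3 sketch of the first pattern: a load balancer whose subnets sit in two different AZs. This isn’t from the original setup; the region, subnet IDs, and load balancer name are hypothetical placeholders, and a real deployment would also attach security groups, target groups, and listeners.

```python
# Minimal sketch: an Application Load Balancer spanning two AZs with boto3.
# Subnet IDs and names below are hypothetical placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

SUBNET_AZ_A = "subnet-0aaaaaaaaaaaaaaaa"  # subnet in AZ a
SUBNET_AZ_B = "subnet-0bbbbbbbbbbbbbbbb"  # subnet in AZ b

# Because the load balancer has a subnet in each AZ, losing one AZ
# leaves it serving traffic from the surviving AZ.
response = elbv2.create_load_balancer(
    Name="web-alb-example",
    Subnets=[SUBNET_AZ_A, SUBNET_AZ_B],
    Scheme="internet-facing",
    Type="application",
)
print(response["LoadBalancers"][0]["DNSName"])
```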

Although I always prefer to let AWS deal with HA wherever possible, I’ve occasionally run into scenarios where I’ve needed to set up a clustered service in AWS. Unfortunately for me, traditional clusters generally rely on three specific components, none of which is easy to reproduce on AWS. These components are:

  • Service IP: generally each node in a cluster will have its own unique IP, and clients will connect to a third, different IP that is assigned to the active server in the cluster. When a failover occurs, the new active node takes over the service IP address from the old node.
  • Shared disk: in the case of a cluster service that requires storage, each server will usually be connected to the same physical disk. In the old days, this meant a special type of enclosure with two connections, but today it usually means a LUN in a SAN that’s presented to HBAs in two or more servers.
  • Heartbeat: clusters need some way to keep track of each other’s health, and for the passive node to decide it’s time to perform a failover. Generally, folks use pinging for this, but ideally there will be a second channel such as a dedicated NIC with a crossover cable, or a small section of the shared disk used for cluster communication (a minimal ping-based sketch follows this list).
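To illustrate the basic idea, here’s a minimal, generic sketch of a ping-based heartbeat check. It isn’t taken from any particular cluster suite, and the peer address, interval, and failure threshold are hypothetical; real cluster software layers quorum, fencing, and multiple heartbeat channels on top of this.

```python
# Minimal ping-based heartbeat sketch (illustrative only).
import subprocess
import time

PEER_IP = "10.0.1.25"    # hypothetical address of the active node
MISSED_LIMIT = 3         # consecutive failures before considering failover
INTERVAL_SECONDS = 5

def peer_is_alive(ip: str) -> bool:
    """Return True if a single ICMP echo to the peer succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

missed = 0
while True:
    if peer_is_alive(PEER_IP):
        missed = 0
    else:
        missed += 1
        if missed >= MISSED_LIMIT:
            print("Peer unreachable -- initiating failover")
            break  # a real cluster would now take over the service IP / disk
    time.sleep(INTERVAL_SECONDS)
```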

Each one of these components has a significant challenge when it comes to AWS:

  • Service IPs need to be portable between hosts in a cluster, but VPC networking requires that each subnet be tied to a single AZ. Or, to put it another way, the same IP cannot exist in two different AZs. You could, theoretically, have a secondary IP that gets moved from server to server within the same AZ (a sketch of that follows this list), but that would break the fundamental rule of HA at the AZ level.
  • Shared disk has long been on everyone’s wish list, but will probably never materialize. An EBS volume (AWS persistent disk) cannot be attached to more than one EC2 instance, period. Furthermore, EBS volumes can only be attached to EC2 instances in the same AZ.
  • While it’s still possible to use ping for the heartbeat, it’s not necessarily reliable as a health-determination mechanism, and unfortunately we don’t have many other avenues. We could create a second set of virtual NICs just for heartbeat communication, but at the end of the day a VPC is all virtual networking anyway, and that secondary virtual NIC can’t be assumed to be isolated from the first in any way.
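For reference, here’s a minimal boto3 sketch of the same-AZ workaround mentioned in the first bullet: reassigning a secondary private IP from the old active node’s ENI to the new one’s. The interface ID and IP below are hypothetical, and, as noted above, this only works when both instances sit in the same subnet (and therefore the same AZ).

```python
# Minimal sketch: move a "floating" secondary private IP to the new active
# node's ENI. AllowReassignment=True lets the address be taken over even if
# it is still assigned to the old node. Interface ID and IP are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SERVICE_IP = "10.0.1.100"                    # the floating service IP
NEW_ACTIVE_ENI = "eni-0ccccccccccccccccc"    # ENI on the node taking over

ec2.assign_private_ip_addresses(
    NetworkInterfaceId=NEW_ACTIVE_ENI,
    PrivateIpAddresses=[SERVICE_IP],
    AllowReassignment=True,
)
# The OS on the new active node must also bring the address up on its
# interface before clients can reach the service there.
```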

So, what do we do? Well, the trick is to figure out which of these services we’re going to need for our cluster, and then find a way to re-create them in AWS. In the next chapter, we’ll cover the first topic: Service IPs.

If you find this stuff as interesting as I do, please let me know in the comments. If you want to know more about high availability in AWS, feel free to connect with me, send me a message, or reach out to info@runascloud.com.
