How to Migrate Your Cassandra Database to Kubernetes with Zero Downtime
Author: Alexander Dejanovski
K8ssandra is a cloud-native distribution of the Apache Cassandra® database that runs on Kubernetes, with a suite of tools to ease and automate operational tasks. In this post, we’ll walk you through a database migration from a Cassandra cluster running on Amazon Elastic Compute Cloud (EC2) instances to a K8ssandra cluster running in Kubernetes on Amazon Elastic Kubernetes Service (EKS), with zero downtime.
As a Cassandra user, you should expect a migration to K8ssandra to happen without downtime. With “classic” clusters running on virtual machines or bare-metal instances, zero downtime is achieved with the data center (DC) switch technique, commonly used in the Cassandra community to move clusters to different hardware or environments. The good news is that it’s not very different for clusters running in Kubernetes, because most Container Network Interfaces (CNIs) provide routable pod IPs.
Routable pod IPs in Kubernetes
A common misconception about Kubernetes networking is that services are the only way to expose pods outside the cluster and that pods themselves are only reachable directly from within the cluster.
Looking at the Calico documentation, we can read the following:
“If the pod IP addresses are routable outside of the cluster then pods can connect to the outside world without Source Network Address Translation (SNAT), and the outside world can connect directly to pods without going via a Kubernetes service or Kubernetes ingress.”
The same documentation tells us that the default CNIs used in AWS EKS, Azure AKS, and GCP GKE all provide routable pod IPs within a virtual private cloud (VPC).
A VPC is necessary because Cassandra nodes in both data centers will need to communicate without going through services. Each Cassandra node stores the list of all the other nodes in the cluster in its
system.peers (or system.peers_v2) table and communicates with them using the IP addresses stored there. If pod IPs aren’t routable, there’s no (easy) way to create a hybrid Cassandra cluster that spans beyond a Kubernetes cluster’s boundaries.
Database migration using Cassandra data center switch
The traditional technique to migrate a cluster to a different set of hardware or environment is to:
- Provision nodes in the target infrastructure and add them to the cluster as a new data center
- Configure keyspaces so that Cassandra replicates data to the new data center
- Switch traffic to the new data center once it’s up to date
- Decommission the old infrastructure
While this procedure was brilliantly documented by my co-worker Alain Rodriguez on the TLP blog, there are some subtleties related to running our new data center in Kubernetes, and more precisely, using K8ssandra, which we’ll cover in detail here.
Here are the steps we’ll go through to perform the migration:
- Restrict traffic to the existing data center
- Expand the Cassandra cluster by adding a new data center in a Kubernetes cluster using K8ssandra
- Rebuild the newly created data center
- Switch traffic over to the K8ssandra data center
- Decommission the original Cassandra data center
Performing the migration
Our starting point is a Cassandra 4.0-rc1 cluster running in AWS on EC2 instances:
In the AWS console, we can access the details of a node in the EC2 service and locate its VPC id, which we’ll need later to create a peering connection with the EKS cluster VPC:
The next step is to create an EKS cluster with the correct settings so that pod IPs will be reachable from the existing EC2 instances.
Creating the EKS cluster
After cloning the project locally, we initialize a few env variables to get started:
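As a sketch, the initialization looks like the following. The `TF_VAR_*` names are an assumption based on the k8ssandra-terraform project’s documentation; verify the exact variable names in the project’s README before using them:

```shell
# Assumed TF_VAR_* names from the k8ssandra-terraform docs; verify before use.
export TF_VAR_environment=dev           # environment tag applied to the resources
export TF_VAR_name=k8ssandra-migration  # name prefix for the EKS cluster
export TF_VAR_region=us-west-2          # same region as the existing EC2 cluster
```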
Then, we go to the env directory and initialize our Terraform files:
We can then update the
variables.tf file and adjust it to our needs:
Ensure the private classless inter-domain routing (CIDR) blocks are different from those used in the EC2 cluster VPC; otherwise, you may end up with IP address conflicts.
Now create the EKS cluster and the three worker nodes:
The operation will take a few minutes to complete and output something similar to this:
The output includes a connect_cluster command, which will allow us to create the kubeconfig context entry needed to interact with the cluster using kubectl.
We can now check the list of worker nodes in our Kubernetes cluster:
VPC peering and security groups
Our Terraform scripts will create a specific VPC for the EKS cluster. For our Cassandra nodes to communicate with the K8ssandra nodes, we’ll need to create a peering connection between both VPCs. Follow the documentation provided by AWS on this topic to create the peering connection: VPC Peering Connection.
Once the VPC peering connection is created, and the route tables are updated in both VPCs, update the inbound rules of the security groups for both the EC2 Cassandra nodes and the EKS worker nodes. You’ll want to accept all TCP traffic on ports 7000 and 7001, which Cassandra nodes use to communicate with each other (unless configured otherwise).
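As a sketch, the rules can be added with the AWS CLI. The security group IDs and CIDR blocks below are placeholders for your own values:

```shell
# Placeholders: the sg-... IDs and CIDR blocks must be replaced with your own.
cat > authorize-cassandra-ports.sh <<'EOF'
#!/bin/sh
# Allow inter-node traffic (7000-7001) from the EKS VPC into the EC2 nodes...
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp --port 7000-7001 \
  --cidr 10.100.0.0/16
# ...and from the EC2 VPC into the EKS worker nodes.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bbbbbbbbbbbbbbbb \
  --protocol tcp --port 7000-7001 \
  --cidr 172.31.0.0/16
EOF
chmod +x authorize-cassandra-ports.sh   # review the IDs and CIDRs, then run it
```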
Preparing the Cassandra cluster for the expansion
When expanding a Cassandra cluster to another data center, assuming you haven’t created your cluster with the
SimpleSnitch (otherwise, you’ll have to switch snitches first), you need to make sure your keyspaces use the
NetworkTopologyStrategy (NTS). This replication strategy is the only one that is DC and rack aware. The default
SimpleStrategy ignores DCs and behaves as if all nodes were colocated in the same DC and rack.
We’ll use cqlsh on one of the EC2 Cassandra nodes to list the existing keyspaces and update their replication strategy.
Several system keyspaces use the special
LocalStrategy and are not replicated across nodes. They contain only node-specific information and cannot be altered in any way.
We’ll alter the following keyspaces to make them use NTS and only put replicas on the existing data center:
- system_auth (contains user credentials for authentication purposes)
- system_distributed (contains repair history data and materialized view build status)
- system_traces (contains probabilistic tracing data)
Add any other user-created keyspace to the list. Here we only have the
tlp_stress keyspace, created by the tlp-stress tool to generate some data for this migration.
We’ll now run the following command on all the above keyspaces, using the existing data center name, in our case us-west-2:
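As a sketch (assuming a replication factor of 3; adjust the factor and the keyspace list to your cluster), the statements can be batched in a file and fed to cqlsh:

```shell
# Write the ALTER statements to a file, then run them with cqlsh on an EC2 node.
# Replication factor of 3 is an assumption matching our three-node us-west-2 DC.
cat > alter-keyspaces-nts.cql <<'EOF'
ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3};
ALTER KEYSPACE system_distributed WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3};
ALTER KEYSPACE system_traces WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3};
ALTER KEYSPACE tlp_stress WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3};
EOF
# cqlsh <ec2-node-ip> -f alter-keyspaces-nts.cql
```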
You should make sure to pin client traffic to the
us-west-2 data center by specifying it as the local data center. You can do this by using the
DCAwareRoundRobinPolicy in some older versions of the DataStax drivers or by specifying it as a local data center when creating a new
CqlSession object in the 4.x branch of the Java Driver.
You can find more information in the driver’s documentation.
Deploying K8ssandra as a new data center
K8ssandra ships with cass-operator, which orchestrates the Cassandra nodes and handles their configuration. Cass-operator exposes an
additionalSeeds setting which allows us to add seed nodes that are not managed by the local instance of cass-operator and, by doing so, create a new data center that will expand an existing cluster.
We’ll list our existing Cassandra nodes as additional seeds; you shouldn’t need more than three entries in this list, even if your original cluster is larger. The following
migration.yaml values file will be used for our K8ssandra Helm chart:
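A sketch of such a values file, assuming the K8ssandra 1.x chart schema; the cluster name, seed IPs, and storage class are placeholders to replace with your own:

```shell
# Placeholder cluster name, seed IPs, and storage class: substitute your own values.
cat > migration.yaml <<'EOF'
cassandra:
  version: "4.0.0"              # keep in line with the existing cluster's version
  clusterName: "my-cassandra"   # MUST match the existing cluster's name
  additionalSeeds:              # IPs of the existing EC2 Cassandra nodes (max 3)
    - 172.31.10.11
    - 172.31.10.12
    - 172.31.10.13
  cassandraLibDirVolume:
    storageClass: gp2
  datacenters:
    - name: k8s-1               # must differ from the existing DC name(s)
      size: 3
stargate:
  enabled: false                # Cassandra only during the migration
reaper:
  enabled: false
medusa:
  enabled: false
EOF
```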
Note that the cluster name must match the value used for the EC2 Cassandra nodes, and the data center should be named differently than the existing one(s). We’ll only install Cassandra in our K8ssandra data center, but you could deploy other components during this phase.
Let’s deploy K8ssandra and have it join the Cassandra cluster:
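With the values file in place, the deployment boils down to installing the chart; the release name and namespace below are our own choices, not requirements:

```shell
# Release name "k8ssandra" and namespace "k8ssandra" are arbitrary choices.
cat > install-k8ssandra.sh <<'EOF'
#!/bin/sh
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
helm install k8ssandra k8ssandra/k8ssandra -f migration.yaml \
  --namespace k8ssandra --create-namespace
EOF
chmod +x install-k8ssandra.sh   # review, then run against your EKS kube context
```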
You can monitor the logs of the Cassandra pods to see if they’re joining appropriately:
Cass-operator will only start one node at a time. So, if you get a message that looks like the following, try checking the logs of another pod:
If VPC peering is done appropriately, the nodes should join the cluster one by one, and after a while,
nodetool status should give an output that looks like this:
Rebuilding the new data center
Now that our K8ssandra data center has joined the cluster, we’ll alter the replication strategies to create replicas in the
k8s-1 DC for the keyspaces we previously altered:
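A sketch of the updated statements, again assuming a replication factor of 3 in each DC and our example keyspace list:

```shell
# Add replicas in the new k8s-1 DC while keeping the us-west-2 replicas.
cat > add-k8s-dc.cql <<'EOF'
ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3, 'k8s-1': 3};
ALTER KEYSPACE system_distributed WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3, 'k8s-1': 3};
ALTER KEYSPACE system_traces WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3, 'k8s-1': 3};
ALTER KEYSPACE tlp_stress WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3, 'k8s-1': 3};
EOF
# cqlsh <node-ip> -f add-k8s-dc.cql
```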
After altering all required keyspaces, rebuild the newly added nodes by running the following command for each Cassandra pod:
Once all three nodes are rebuilt, the load should be similar on all nodes:
Note that K8ssandra will create a new superuser and that the existing users in the cluster will be retained as well after the migration. You can forcefully recreate the existing superuser credentials in the K8ssandra data center by adding the following block in the “cassandra” section of the Helm values file:
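A sketch of that override, assuming the K8ssandra 1.x auth settings; verify the exact key names against the chart’s values reference, and replace the username placeholder with your existing superuser’s name:

```shell
# Assumed K8ssandra 1.x keys under cassandra.auth.superuser; verify in the chart docs.
cat > superuser-override.yaml <<'EOF'
cassandra:
  auth:
    superuser:
      username: cassandra-admin   # placeholder: the existing superuser's name
EOF
# Merge this into migration.yaml (or pass it to helm with an extra -f flag).
```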
Switching traffic to the new data center
Client traffic can now be directed at the
k8s-1 data center, the same way we previously restricted it to
us-west-2. If your clients are running from within the Kubernetes cluster, use the Cassandra service exposed by K8ssandra as a contact point for the driver.
If the clients are running outside of the Kubernetes cluster, you’ll need to enable Ingress and configure it appropriately (which is outside the scope of this blog post).
Decommissioning the old data center and finishing the migration
Once all the client apps/services have been restarted, we can alter our keyspaces to only replicate them on the k8s-1 data center:
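A sketch of the final replication change, with the same assumed replication factor and keyspace list as before:

```shell
# Drop the us-west-2 replicas, keeping only the k8s-1 DC.
cat > k8s-only.cql <<'EOF'
ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'k8s-1': 3};
ALTER KEYSPACE system_distributed WITH replication = {'class': 'NetworkTopologyStrategy', 'k8s-1': 3};
ALTER KEYSPACE system_traces WITH replication = {'class': 'NetworkTopologyStrategy', 'k8s-1': 3};
ALTER KEYSPACE tlp_stress WITH replication = {'class': 'NetworkTopologyStrategy', 'k8s-1': 3};
EOF
# cqlsh <pod-ip> -f k8s-only.cql
```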
Then ssh into each of the Cassandra nodes in
us-west-2 and run the following command to decommission them:
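As a sketch, with placeholder IPs standing in for the us-west-2 node addresses:

```shell
# Placeholder IPs: substitute the addresses of your us-west-2 nodes.
cat > decommission-us-west-2.sh <<'EOF'
#!/bin/sh
# Decommission the original nodes one at a time (the loop runs sequentially).
for host in 172.31.10.11 172.31.10.12 172.31.10.13; do
  ssh "$host" nodetool decommission
done
EOF
chmod +x decommission-us-west-2.sh
```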
They will appear as leaving (UL) while the decommission is running:
The operation should be fairly fast, as no streaming will take place since we no longer have keyspaces replicated on us-west-2.
Once all three nodes are decommissioned, we should be left with the
k8s-1 data center only:
As a final step, we can now delete the VPC peering connection as it is no longer necessary.
Note that the cluster can run in hybrid mode for as long as necessary. There’s no requirement to delete the
us-west-2 data center if it makes sense to keep it alive.
In this post, I’ve illustrated that it’s indeed possible to migrate existing Cassandra clusters to K8ssandra without downtime by leveraging flat networking. This allows Cassandra nodes running in VMs to connect to Cassandra pods running in Kubernetes directly. If you haven’t explored K8ssandra yet, I strongly encourage you to check it out!