I read a blog post from Google Cloud Platform discussing how to load balance between multiple clouds with Kubernetes (K8s) using Cloudflare Load Balancer, and I decided to see how easy it was to set up. I have always thought that migrating to multiple clouds would, for many organizations, ultimately lead to less vendor lock-in and more resilient services. One of the benefits of K8s is its portability: you can run compliant K8s clusters on premises or on any of the major cloud vendors. As an added bonus, I decided to give Microsoft Azure a spin, since I had never used it and wanted to see how quickly I could spin up a Kubernetes cluster on two clouds. In the end, I was able to quickly create clusters on Azure using Azure Kubernetes Service and on Google Cloud Platform using Google Kubernetes Engine.
First, a few scenarios where multi-cloud may make sense.
Exit the Data Center, Enter the Clouds
Perhaps you have a suite of on-premises applications with varying resource needs, and you anticipate demand spikes during holiday seasons or other events. Since you don't have the capacity on premises, you spin up a Kubernetes cluster with cloud vendor X. You could put Cloudflare Load Balancer in front of your on-premises and cloud Kubernetes deployments, which would let you migrate from on premises to the cloud in a more seamless manner. You could move your on-premises services in phases, cut over to one or more clouds, and wind down your data center resources. Or you could use the same setup to experiment with a particular cloud vendor. Kubernetes portability and compliance means less deployment work and less cloud-vendor-specific knowledge.
Everything falls apart
Before experimenting with multi-cloud K8s, I used to pitch the idea of using multiple clouds for resiliency, and more often than not I was countered with "AWS has X many Regions and Availability Zones." Yes, yes it does. But what if it failed for some reason? Do your applications need to run because it would cost your organization money if they didn't? Do lives depend on your applications running? Do you like to sleep well at night? In my limited experience, software and hardware fail for all kinds of reasons, from simple mistakes like not renewing an SSL certificate to switches and routers just dying. Hard drives? They fail all the time. Expecting failure, and thinking about what can fail and how, is a mindset. When I started building microservices, I read Production-Ready Microservices, which discusses resiliency at every level, and I started asking questions of my tech stack. On one project, I found out early that a large suite of applications I was supporting ran on 6 virtual machines. Then I found out all 6 VMs were on the same physical hardware. So, if that hardware went down…yeah. I then learned about affinity rules.
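Kubernetes has a similar concept at the pod level: pod anti-affinity tells the scheduler to spread replicas across nodes so a single host failure doesn't take everything down. A minimal sketch, assuming an nginx deployment (the name and labels here are illustrative):

```shell
# Hypothetical deployment that spreads 3 nginx replicas across
# distinct nodes via a hard pod anti-affinity rule.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spread-nginx
  template:
    metadata:
      labels:
        app: spread-nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: spread-nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```

The `topologyKey` of `kubernetes.io/hostname` is what makes the rule "one replica per node"; swapping in a zone label spreads replicas across zones instead.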
If you wanted to, you could set up Cloudflare to load balance between Kubernetes workloads on premises, on Google Cloud Platform, AWS, Azure, and so on. Whether you need this type of resiliency depends on the nature of your services and what it means when they are not up.
Choose the Very Best One
There may be forces that are not technical in nature that lead you to choose a cloud vendor. You can take your portable K8s deployments and leave a cloud vendor. Setting up Cloudflare Load Balancing, even with a single cloud, can make it easy to switch over to another cloud vendor later. Reasons for moving off a vendor's cloud include cost, legal rules that affect where data can be stored, or wanting to go green with another vendor.
The great mistake is to anticipate the outcome of the engagement; you ought not to be thinking of whether it ends in victory or defeat. Let nature take its course, and your tools will strike at the right moment. — Bruce Lee
Kubernetes is not applicable to every type of application. It may make perfect sense to use AWS Lambda for aspects of your software environment. You may find it better to use Google App Engine for the performance review app that is only used one month a year, since it can scale to zero. You will have to determine how, when, and if it makes sense to use a multi-cloud approach in migrating to the cloud, staying in the cloud, or getting ahead in the cloud.
Here are the incantations for setting up a Cloudflare Load Balancer with Azure AKS and Google Cloud Platform.
Setting up a Kubernetes Cluster with Google Cloud Platform
# Google Cloud Platform - gcloud cli required
gcloud auth login
gcloud components update
gcloud config set project k8s-cloud-hopper
gcloud compute zones list
# Create Kubernetes Cluster with 3 nodes
gcloud container clusters create k8s-cluster-water --num-nodes=3 --zone us-east4-a
gcloud container clusters describe k8s-cluster-water --zone us-east4-a
# Run and deploy your application, expose service
kubectl run water-nginx --image=nginx --port 80
kubectl expose deployment water-nginx --type=LoadBalancer --name=nginx-service
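Once the service is exposed, you need the external IP that GKE provisions so you can point a Cloudflare origin pool at it later. One way to grab it (the service name matches the one created above; the IP can take a minute or two to appear):

```shell
# Show the service; EXTERNAL-IP reads <pending> until GCP
# finishes provisioning the network load balancer.
kubectl get service nginx-service

# Or pull just the IP once it is assigned, via a jsonpath query.
kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```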
Setting up a Kubernetes Cluster with Microsoft Azure
# Get the Azure CLI
brew update && brew install azure-cli
az login
# Create Cluster - the resource group must exist first (location is an example)
az group create --name FireAKSCluster --location eastus
az aks create --resource-group FireAKSCluster --name FireAKSCluster --node-count 3 --enable-addons monitoring --generate-ssh-keys
# Get coffee - cluster creation took a while
# Connect to Cluster: install kubectl
az aks install-cli
az aks get-credentials --resource-group FireAKSCluster --name FireAKSCluster
# Verify
kubectl get nodes
# Run and deploy your application, expose service
kubectl run fire-nginx --image=nginx --port 80
kubectl expose deployment fire-nginx --type=LoadBalancer --name=nginx-service
# Scale
kubectl scale deployment fire-nginx --replicas=3
# Cleanup
az group delete --name FireAKSCluster --yes --no-wait
Once you are working with Kubernetes, it is the same everywhere: a definite advantage when working across clouds.
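For example, both `gcloud container clusters` and `az aks get-credentials` write entries into your kubeconfig, so you can hop between the two clusters using contexts. A sketch (the exact context names depend on your cluster and project names; the GKE one below is a guess at the usual `gke_<project>_<zone>_<cluster>` format):

```shell
# List the contexts the GCP and Azure CLIs added to ~/.kube/config
kubectl config get-contexts

# Point kubectl at the AKS cluster...
kubectl config use-context FireAKSCluster

# ...or at the GKE cluster.
kubectl config use-context gke_k8s-cloud-hopper_us-east4-a_k8s-cluster-water

# The same commands now run against whichever cluster is current.
kubectl get nodes
```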
Setting up Cloudflare Load Balancer is pretty simple. Get an account, point your site at Cloudflare's name servers, and begin configuration:
You set up origin pools with the addresses of your Kubernetes deployments and configure health checks. I decided to start with GCP first and fail over to Azure. Switching is easy: just change the order.
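The same pools can also be created through Cloudflare's API instead of the dashboard. A hedged sketch, with the account ID, API token, and origin IP all placeholders and the request body trimmed to the essentials (check Cloudflare's Load Balancing API docs for the full schema):

```shell
# Create an origin pool pointing at the GKE service's external IP.
# CF_ACCOUNT_ID and CF_API_TOKEN are assumed to be set; 203.0.113.10
# stands in for the real EXTERNAL-IP from kubectl get service.
curl -X POST \
  "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/load_balancers/pools" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "name": "gcp-pool",
    "origins": [
      { "name": "water-nginx", "address": "203.0.113.10", "enabled": true }
    ]
  }'
```

A second call with the Azure service's IP gives you the failover pool; the load balancer's pool order then controls which cloud takes traffic first.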
I am looking forward to experimenting more with Kubernetes, distributed systems, and multi-cloud deployments.