EXPEDIA GROUP TECHNOLOGY — ENGINEERING

Karmada: Multi-Cloud, Multi-Cluster Kubernetes Orchestration, Part 2

Manage multi-cloud, multi-cluster Kubernetes clusters with Karmada

Rajatporwal
Expedia Group Technology


A couple sit together on a red rocky landscape admiring the panoramic view.
Photo by Nicole Geri on Unsplash

This is the second part of a two-part series. In the first part, we discussed the motivation for multi-cloud, multi-cluster Kubernetes and how Karmada can orchestrate application workloads across clusters to meet our requirements. We also discussed Karmada's concepts, architecture and features.

In this part, we will get hands-on with Karmada: we will deploy an application into multiple clusters and explore the various options and features Karmada provides.

Environment setup

We are going to set up a lab environment on macOS. If you prefer, you can use a Linux operating system instead.

Here, we install the Karmada control plane components in a Kubernetes cluster known as the host cluster. We will then join three member clusters to the host cluster: member1 and member2 will join in Push mode, while member3 will join in Pull mode.

Prerequisites

To run the local setup script, you will typically need git, Go, Docker, kind and kubectl installed; see the Karmada installation docs for the exact versions.

Install the Karmada control plane

  1. Clone the Karmada repository to your machine:
$ git clone https://github.com/karmada-io/karmada

2. Deploy and run the Karmada control plane:

$ cd karmada
$ hack/local-up-karmada.sh

This script will do the following tasks for us:

  • Start a Kubernetes cluster to run the Karmada control plane, also known as the host cluster.
  • Build the Karmada control plane components from the current codebase.
  • Deploy the Karmada control plane components on the host cluster.
  • Create the member clusters and join them to Karmada.

If everything goes well, you will see messages similar to the following at the end of the script output:

Local Karmada is running.

To start using your Karmada environment, run:
export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.

There are two contexts in Karmada:

  • karmada-apiserver: kubectl config use-context karmada-apiserver
  • karmada-host: kubectl config use-context karmada-host

The karmada-apiserver context is the main kubeconfig to use when interacting with the Karmada control plane, while karmada-host is only used for debugging the Karmada installation on the host cluster. You can check all clusters at any time by running kubectl config view. To switch cluster contexts, run kubectl config use-context [CONTEXT_NAME].

Note that although we have a new context, karmada-apiserver, it is not an actual Kubernetes cluster. Rather, it is the Karmada control plane API server running inside the karmada-host Kubernetes cluster.

To list the Karmada control plane components running on the karmada-host cluster, run the commands below:

$ kubectl config use-context karmada-host
$ kubectl get pod -n karmada-system

To list the target clusters added to the karmada-apiserver, execute the commands below:

$ kubectl config use-context karmada-apiserver
$ kubectl get clusters

Three clusters will have been added as targets of the karmada-apiserver: member1, member2 and member3.

Side note: If we want to federate resources to the host cluster as well as to member1, member2 and member3 (we will not be doing this in the demo), we could add this karmada-host as a target cluster to the karmada-apiserver using the command below.

# Do not run this for the demo!
$ karmadactl join host \
--karmada-context=karmada-apiserver \
--cluster-context=karmada-host

Deploy a multi-cluster application with Karmada

Now we will deploy a sample nginx application into multiple clusters with Karmada.

  1. Switch the kubectl context to karmada-apiserver:
$ kubectl config use-context karmada-apiserver

2. Create the nginx deployment in the Karmada API server

Now, we will create an nginx Deployment in the Karmada API server. This creates the resource, but the nginx deployment will not be propagated to any member cluster until we apply a PropagationPolicy.

$ kubectl create -f nginx-deployment.yaml

The nginx Deployment manifest is as below:
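The manifest is a standard nginx Deployment; a minimal equivalent sketch (the resource name, labels and image are assumptions, with the replica count matching the two replicas referenced later in this walkthrough):

```yaml
# Minimal nginx Deployment (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```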

3. Create a PropagationPolicy to propagate nginx to the member clusters

Karmada offers many configuration options in PropagationPolicy to support various propagation strategies. We will discuss them now.

3.1 Replicated/duplicated multi-cluster Nginx deployment

In this replicated multi-cluster PropagationPolicy, the nginx deployment is duplicated to all member clusters. So when we apply the PropagationPolicy below, every member cluster will run the nginx deployment with two replicas.

Note that in the PropagationPolicy below, we have set the field replicaSchedulingType: Duplicated.
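A PropagationPolicy along these lines (a sketch; the policy name is an assumption) duplicates the deployment to every member cluster:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    replicaScheduling:
      # Duplicated: every selected cluster gets the full replica count
      replicaSchedulingType: Duplicated
```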

$ kubectl create -f propagationpolicy.yaml

If we check the deployment status, we will see a total of six replicas running (two on each member cluster).

An nginx deployment has 2 replicas on each target cluster

3.2 Divided multi-cluster Nginx deployment

In the divided multi-cluster propagation policy, the nginx deployment's replicas are divided across the member clusters. We can also configure the weight of the replica distribution across member clusters.

Note that in the PropagationPolicy below, we have set the field replicaSchedulingType: Divided and specified a replica distribution weight preference.
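A sketch of such a policy (the policy name and the 1:1 weights are assumptions), dividing the replicas between member1 and member2 with equal static weights:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          # Equal weights: the replicas split 1:1 between member1 and member2
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```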

$ kubectl create -f propagationpolicy.yaml

If we check the status of the deployment, we will see a total of two replicas distributed across the member1 and member2 clusters, that is, one replica on each.

The below image shows the deployment status on the karmada-apiserver.

An nginx deployment having 1 replica on each on member1 and member2 clusters.

The below image shows the deployment status on the member1 cluster.

An nginx deployment has only 1 replica on the member1 cluster

3.3 Deployment propagation to selected clusters only

In the propagation policy, we can select the member clusters to which we want our workload propagated. This can be achieved through the various options Karmada offers under ClusterAffinity. These options are:

  • LabelSelector
  • FieldSelector
  • ClusterNames
  • Exclude

Here is a sample propagation policy based on ClusterNames.
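A minimal sketch of such a policy (the policy name is an assumption), restricting propagation to member1 by cluster name:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      # Only clusters listed here receive the workload
      clusterNames:
        - member1
```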

$ kubectl create -f propagationpolicy.yaml

If we check the status of the deployment, we will see that Karmada has propagated it only to the member1 cluster, where both replicas are running.

The below image shows the deployment status on the karmada-apiserver.

An nginx deployment with overall 2 replicas on karmada-apiserver.

The below image shows the deployment status on the member1 cluster.

Member1 cluster has 2 replicas of nginx deployment.

4. Override policy to allow configuration overrides per cluster

The OverridePolicy is used to declare override rules for resources when they are propagating to different clusters.

Now, with the help of an OverridePolicy, we will add a new label env: member1 to the nginx deployment propagated to the member1 cluster only. If we look at the current state of the nginx deployment, we can see a few labels that Karmada adds by default.

Image showing additional labels added by karmada on deployment manifest.

We will apply the below Override policy.
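A sketch of such an OverridePolicy (the policy name is an assumption; labelsOverrider is one way to express this, a plaintext JSON-patch overrider would also work):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        # Add the label only on the copy destined for member1
        labelsOverrider:
          - operator: add
            value:
              env: member1
```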

$ kubectl create -f overridepolicy.yaml

Now, if we check the nginx deployment in the member1 cluster, we should see the new label env: member1 there.

Image showing a new label env:member1 got added by override policy

Conclusion

With the help of Karmada, it is possible to orchestrate multi-cluster, multi-cloud deployments. Karmada supports various options for propagating resources to target clusters, which can be used as needed. Override policies are very handy for applying cluster-specific configuration to workloads.

Karmada provides many other useful features not covered in this post. Here are some ideas for further reading in the official Karmada documentation:

  • Karmada Handling of Target Cluster Failover
  • Global Search for Resources
  • Descheduler For Rescheduling
  • Cluster Accurate Scheduler Estimator For Rescheduling
  • Schedule based on Cluster Resource Modeling
  • Multi-cluster Service Discovery
  • Multi-cluster Ingress

https://careers.expediagroup.com/life/
