EXPEDIA GROUP TECHNOLOGY — ENGINEERING

Karmada: Multi-Cloud, Multi-Cluster Kubernetes Orchestration, Part 1

Manage multi-cloud, multi-cluster Kubernetes clusters with Karmada

Rajatporwal
Expedia Group Technology


Photo by Jordan Opel on Unsplash

Application containerization is the modern way of building and deploying software. Over the years, Kubernetes has stood out as one of the best platforms for container orchestration. A single cluster is easy to set up and manage and provides the core features of Kubernetes, but it lacks the resilience and high availability Kubernetes is famous for. In many cases, a single cluster cannot handle the load efficiently across all components. As a result, we need more than one cluster for a better division of workload and resources, hence the need for a multi-cluster solution.

In this article, we will discuss what a multi-cluster setup is, why we need it, and how Karmada lets us run containerized applications across multiple Kubernetes clusters and clouds.

What is multi-cluster Kubernetes?

Multi-cluster is a strategy for deploying an application on or across multiple Kubernetes clusters. This helps us to improve the availability, isolation, and scalability of applications. Multi-cluster can also be important to ensure compliance with different and conflicting regulations, as individual clusters can be adapted to comply with geographic regulations. The speed and safety of software delivery can also be increased, with individual development teams deploying applications to isolated clusters and selectively exposing which services are available for testing and release.

Multi-cluster application architecture

Multi-cluster applications can be architected in two fundamental ways:

1. Replicated

In this model, each cluster runs a full copy of the application. This simple but powerful approach enables an application to scale globally, as the application can be replicated into multiple regions or clouds and user traffic routed to the closest or most appropriate cluster. Coupled with a health-aware global load balancer, this architecture also enables failover.

2. Split-by-service

In this model, the application is divided into multiple components or services that are distributed across multiple clusters. This approach provides stronger isolation between parts of the application at the expense of greater complexity.

Benefits of multi-cluster Kubernetes

  • Increased scalability & availability
  • Application isolation
  • Security and compliance

So far, we have seen what multi-cluster Kubernetes is, why we need it, and how it helps us deploy applications with scalability, availability, isolation, security, and compliance. Next, we will look at how Karmada orchestrates multi-cluster Kubernetes across multiple clouds.

Introduction to Karmada

Karmada (Kubernetes Armada) is a Kubernetes management system that enables us to run our cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to our applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Architecture of Karmada

The architecture of Karmada is similar to that of a single Kubernetes cluster in many ways. Both of them have a control plane, an API server, a scheduler, and a group of controllers.

The architecture of Karmada, showing the control plane components and their interaction with target clusters
Source: https://karmada.io/docs/core-concepts/architecture

The Karmada Control Plane consists of the following components:

  • Karmada API Server provides Kubernetes native APIs and policy APIs extended by Karmada.
  • Karmada Scheduler focuses on fault domains, cluster resources, Kubernetes versions, and add-ons enabled in the cluster to implement multi-dimensional, multi-weight, and multi-cluster scheduling policies.
  • Karmada Controller Manager runs various controllers, which watch Karmada objects and then talk to the member clusters’ API servers to create regular Kubernetes resources.
  • ETCD stores the Karmada API objects. The API Server is the REST endpoint all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API Server.

Karmada concepts

Here we will discuss the key concepts in Karmada.

  1. Resource Template: Karmada uses the Kubernetes Native API definition for the federated resource template, to make it easy to integrate with existing tools that have already been adopted by Kubernetes.
  2. Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements. It supports a 1:n mapping of policy to workload, so users don’t need to specify scheduling constraints every time a federated application is created.
  3. Override Policy: Karmada provides a standalone Override Policy API to automate cluster-specific configuration, for example overriding the image prefix based on the member cluster’s region.
This shows the core concepts and terminologies used in Karmada and their overall workflow in the Karmada system.
Source: https://karmada.io/docs/core-concepts/concepts
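
The concepts above can be sketched with a minimal example based on the Karmada quickstart. Here a plain Kubernetes Deployment acts as the resource template, and a PropagationPolicy places it; the Deployment name `nginx` and the cluster names `member1`/`member2` are placeholders:

```yaml
# A standalone PropagationPolicy that spreads an existing Deployment
# (the resource template) named "nginx" across two member clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:          # which resource templates this policy applies to
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:          # scheduling constraint: the target clusters
      clusterNames:
        - member1
        - member2
```

Applied to the Karmada API server alongside the `nginx` Deployment, this single policy handles placement for the workload without any change to the Deployment manifest itself.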

Key features of Karmada

Cross-cloud multi-cluster multi-mode management

  1. Safe isolation: Karmada creates a namespace for each member cluster, prefixed with karmada-es-*.
  2. Karmada supports two connection modes (Push and Pull) for target clusters. In Push mode, Karmada connects directly to the target cluster’s kube-apiserver, while in Pull mode an agent component runs in the target cluster and Karmada delegates tasks to it.
  3. Multi-cloud support (as long as the clusters comply with Kubernetes specifications).
The Karmada control plane connected to Kubernetes clusters hosted by various cloud providers
Source: https://karmada.io/docs/key-features/features

Multi-policy multi-cluster scheduling

  1. Karmada can distribute workloads across clusters using different scheduling strategies such as ClusterAffinity, Tolerations, SpreadConstraint, and ReplicasScheduling.
  2. Karmada supports a different configuration of an application per cluster by leveraging Override Policies.
  3. Karmada has a re-scheduling feature that triggers workload rescheduling based on instance state changes in member clusters.

Much like Kubernetes scheduling, Karmada supports different scheduling policies. The overall scheduling process is shown in the figure below:

Resources are scheduled by the Karmada scheduler, and cluster-specific configuration is overridden by an Override Policy.
Source: https://karmada.io/docs/key-features/features
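
As a sketch of the override mechanism described above, the OverridePolicy below rewrites the image registry for one member cluster only. The Deployment name `nginx`, the cluster name `member1`, and the registry value are illustrative placeholders:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        imageOverrider:       # rewrite the image registry for member1 only
          - component: Registry
            operator: replace
            value: registry.member1.example.com
```

Other clusters receiving the same Deployment keep the original image reference; only the copy propagated to `member1` is specialized.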

Cross-cluster failover of applications

  • Cluster failover: Karmada lets users set distribution policies, and after a cluster failure it automatically migrates the faulty cluster’s replicas in a centralized or decentralized manner.
  • Cluster taint: When a user sets a taint on a cluster and the resource distribution policy cannot tolerate that taint, Karmada also automatically triggers migration of that cluster’s replicas.
  • Uninterrupted service: During replica migration, Karmada ensures that the number of service replicas does not drop to zero, so the service is not interrupted.

Karmada supports failover for clusters; a cluster failure causes replicas to fail over as follows:

Karmada handles a cluster failure by moving the replicas to the available clusters.
Source: https://karmada.io/docs/key-features/features

The user has joined three clusters to Karmada: member1, member2, and member3. A Deployment named Foo, with 6 replicas, is deployed on the Karmada control plane and distributed to clusters member1 and member2 using a PropagationPolicy.

When cluster member1 fails, its pod instances are evicted and migrated to cluster member2 or to the new cluster member3. This migration behaviour can be controlled by the ReplicaSchedulingStrategy field of the PropagationPolicy/ClusterPropagationPolicy.
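
The replica scheduling behaviour can be sketched as a `placement` fragment of a PropagationPolicy. The cluster names and weights here are illustrative, not taken from the scenario above:

```yaml
# placement section of a PropagationPolicy
placement:
  clusterAffinity:
    clusterNames: [member1, member2]
  replicaScheduling:
    replicaSchedulingType: Divided       # split replicas across clusters
    replicaDivisionPreference: Weighted  # divide them by static weights
    weightPreference:
      staticWeightList:
        - targetCluster:
            clusterNames: [member1]
          weight: 2                      # member1 receives twice member2's share
        - targetCluster:
            clusterNames: [member2]
          weight: 1
```

With `Divided` scheduling, the total replica count is split among the matched clusters; when one cluster becomes unavailable, its share is rescheduled to the remaining healthy clusters.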

Cross-cluster service governance

  1. Multi-cluster service discovery: ServiceExport and ServiceImport enable cross-cluster service discovery.
  2. Multi-cluster network support: Use Submariner or other related open-source projects to connect the container networks between clusters.

Users can enable cross-cluster service governance with Karmada:

A Kubernetes service created in one cluster is imported into another target cluster and can be accessed there as a local service.
Source: https://karmada.io/docs/key-features/features

In the diagram above, Submariner is used to connect the networks between the member clusters.

The application foo and its service svc-foo are deployed to the member1 cluster along with a ServiceExport resource. In the member2 cluster, a ServiceImport resource is created, and Karmada imports the service svc-foo into member2 as derived-svc-foo. Any application in the member2 cluster can then call this service endpoint internally to access member1’s svc-foo.
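
The export/import pair for this scenario can be sketched with the Kubernetes Multi-Cluster Services API, which Karmada builds on. The `default` namespace and port 80 are assumptions for illustration; in practice these resources are themselves propagated to the right member clusters via policies:

```yaml
# In member1: export the existing service svc-foo
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: svc-foo
  namespace: default
---
# In member2: import it; Karmada creates a local "derived-svc-foo" service
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: svc-foo
  namespace: default
spec:
  type: ClusterSetIP
  ports:
    - port: 80
      protocol: TCP
```

Once the import is reconciled, workloads in member2 reach the remote backend through the derived service name, with the inter-cluster traffic carried over the Submariner-connected network.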

In this part, we have learned about the need for a multi-cluster Kubernetes setup, its architectures, strategies, and benefits, and how Karmada orchestrates workloads in a multi-cluster, multi-cloud Kubernetes setup. We have also covered the architecture of Karmada, its key concepts, and its features.

In the next part, we will get our hands dirty with some Karmada demos and see in practice how it orchestrates workloads across multiple clusters.

https://careers.expediagroup.com/life/
