Google Cloud Anthos Series - Anthos Multi-cluster Ingress

Google Cloud Anthos Series : Part-6

Shijimol A K
Google Cloud - Community
7 min read · Mar 9, 2022


Welcome to Part-6 of the “Google Cloud Anthos Series”. You can find the complete series here

Multi Cluster Ingress is a Google-hosted service that helps customers build resilient Anthos architectures with clusters deployed across multiple cloud regions. In addition to resiliency, it helps redirect traffic to the nearest cluster to ensure the lowest latency and a consistent user experience. Under the wraps, the service leverages Google Front Ends (GFEs) through the external HTTP(S) load balancing service: any incoming request is routed by GFEs to the GKE cluster closest to the client. Organizations can also choose a multi-cluster architecture with Multi Cluster Ingress when there are requirements such as regional isolation of workloads or data residency.

Multi Cluster Ingress Architecture components

Multi-cluster Ingress architecture components and their features are listed below in brief

Multi Cluster Ingress controller (Anthos Ingress Controller)

  • Refers to a control plane outside of the member clusters
  • Leverages the GCP external HTTP(S) load balancer by creating backend Network Endpoint Groups (NEGs)
  • Backend endpoints of the LB dynamically track pod endpoints
  • Tracks pods in clusters and ensures that the load balancer is aware of updates in the cluster

Config cluster

  • Used to deploy the custom resource types MultiClusterIngress and MultiClusterService required for Multi Cluster Ingress operations
  • One GKE cluster is designated as the control plane that centrally manages the Multi Cluster Ingress resources
  • MultiClusterService (MCS) represents a service deployed across different clusters that will be exposed by the load balancer
  • MultiClusterIngress (MCI) is similar to the Ingress resource in a Kubernetes cluster; it defines the backends, paths, protocols, etc. that send matching traffic to an MCS

Fleets

  • Formerly known as environ
  • Groups together multiple clusters that will act as backend for Multi Cluster Ingress
  • Supports “namespace sameness”, i.e., resources with the same name in the same namespace across multiple clusters are treated as a single workload

Member cluster

  • Refers to GKE clusters registered to a fleet
  • Hosts the workloads to which Multi Cluster Ingress redirects traffic
  • MCS by default sends traffic to all backend clusters, but this can be customised in the MCS YAML config to target specific clusters

High level architecture of Multi cluster ingress to multiple GKE clusters is as shown below:

Ref: Google Cloud Multi Cluster Ingress documentation

Multi Cluster Ingress deployment

Before we start with the deployment, a word on pricing. When using only the Multi Cluster Ingress service with GKE clusters, customers can opt for standalone pricing. This pricing gets applied only if the Anthos API is disabled. If the Anthos API is enabled and you are using other Anthos features as well, billing depends on the cluster vCPUs and Anthos pricing. For this deployment we will be using Anthos pricing and the Online Boutique application we used in our earlier Google Cloud DevOps blog series.

Note: To understand more about Anthos pricing, you can refer to part 3 of this blog series

Deployment considerations

  • Deploying Multi Cluster Ingress requires GKE administrator rights on the GCP project, as some steps need elevated permissions
  • It is recommended to enable Workload Identity on the clusters for seamless authentication when workloads in the clusters access other GCP resources

Diagram below shows Samajik’s end state architecture using Multi cluster Ingress

Step-by-step deployment of Multi Cluster Ingress from Cloud Shell is given below. Replace the variables in the code marked in bold with values specific to your environment, i.e., project ID, cluster name, GCP region, etc.

  1. Enable the APIs required for the deployment. The below command should be run to use Anthos pricing.
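Something along these lines enables the relevant services (PROJECT_ID is a placeholder; the exact API list may differ for your setup):

```shell
# Set the active project (replace PROJECT_ID with your project ID)
gcloud config set project PROJECT_ID

# Enable the APIs used by GKE fleets and Multi Cluster Ingress.
# Enabling anthos.googleapis.com opts the project into Anthos pricing.
gcloud services enable \
    anthos.googleapis.com \
    container.googleapis.com \
    gkehub.googleapis.com \
    multiclusteringress.googleapis.com \
    multiclusterservicediscovery.googleapis.com
```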

2. Deploy clusters with workload identity enabled. Here we are deploying three clusters named cluster1, cluster2 and cluster3 across three different regions
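A sketch of the cluster creation commands, assuming three zonal clusters; the zones below are illustrative, so pick regions close to your users:

```shell
# Create three clusters in different regions with Workload Identity
# enabled (--workload-pool uses the project's workload identity pool)
gcloud container clusters create cluster1 \
    --zone us-central1-a \
    --num-nodes 3 \
    --workload-pool=PROJECT_ID.svc.id.goog

gcloud container clusters create cluster2 \
    --zone europe-west1-b \
    --num-nodes 3 \
    --workload-pool=PROJECT_ID.svc.id.goog

gcloud container clusters create cluster3 \
    --zone asia-south1-a \
    --num-nodes 3 \
    --workload-pool=PROJECT_ID.svc.id.goog
```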

3. Retrieve the credentials for the clusters and rename them for ease of use. Repeat the below steps for all the three clusters after replacing the cluster name
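For cluster1 this looks roughly as follows; the auto-generated context name follows the pattern gke_PROJECT_ZONE_CLUSTER:

```shell
# Fetch credentials for cluster1 into the local kubeconfig
gcloud container clusters get-credentials cluster1 --zone us-central1-a

# Rename the long auto-generated context to a short name
kubectl config rename-context \
    gke_PROJECT_ID_us-central1-a_cluster1 cluster1
```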

4. Add clusters to the fleet using workload identity. Repeat the below command for all three clusters after replacing the cluster name
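A sketch for cluster1; the zone in --gke-cluster must match the zone the cluster was created in:

```shell
# Register cluster1 to the fleet, using Workload Identity for
# authentication instead of a service account key
gcloud container fleet memberships register cluster1 \
    --gke-cluster=us-central1-a/cluster1 \
    --enable-workload-identity
```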

5. Confirm fleet membership using the below command
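The registered memberships can be listed with:

```shell
# Lists all clusters registered to the project's fleet
gcloud container fleet memberships list
```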

Output will be similar to below

6. Configure cluster1 as the config cluster
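Assuming cluster1's fleet membership is also named cluster1, this is roughly:

```shell
# Enable Multi Cluster Ingress and designate cluster1 as the
# config cluster that hosts the MCI/MCS resources
gcloud container fleet ingress enable \
    --config-membership=cluster1
```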

7. Create a yaml file namespace.yaml that defines the namespace using the below content
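The namespace name below (mci) is illustrative; any name works as long as the same one is used in every cluster and in the MCS/MCI manifests later:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mci
```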

8. Switch context to cluster1 and deploy namespace.yaml to create the namespace. Repeat the same steps for cluster2 and cluster3
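Using the renamed contexts from the earlier step:

```shell
# Point kubectl at cluster1 and create the namespace
# (repeat with cluster2 and cluster3 contexts)
kubectl config use-context cluster1
kubectl apply -f namespace.yaml
```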

9. Clone the online boutique microservices demo application from GitHub
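```shell
# Clone the Online Boutique demo and enter the repo directory
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
```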

10. Edit kubernetes-manifests.yaml and remove the following block of code that configures the load balancer ingress. We will be configuring Multi Cluster Ingress for the frontend service later.
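In the demo repository the block in question is the frontend-external Service of type LoadBalancer in release/kubernetes-manifests.yaml; it looks roughly like this:

```yaml
# Remove this Service so the frontend is not exposed via a
# per-cluster LoadBalancer; MCI will expose it instead
apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - name: http
    port: 80
    targetPort: 8080
```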

11. Deploy the application in all three clusters by switching the context. The below commands will create the deployment in cluster 1. Repeat the same for cluster 2 and cluster 3
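Assuming the namespace created earlier is named mci:

```shell
# Deploy the demo application into the mci namespace on cluster1
# (repeat with cluster2 and cluster3 contexts)
kubectl config use-context cluster1
kubectl apply -n mci -f release/kubernetes-manifests.yaml
```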

12. Once deployed successfully you can see the workloads listed in the portal for each cluster

Note: If you see an error in scheduling pods due to insufficient resources, enable autoscaling or create additional node pools to schedule the pods.

13. Next step is to create a MCS resource that can represent the frontend service across all three clusters. Create a file mcs_boutique.yaml with the below content.
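A sketch of the manifest, assuming the namespace is mci and the frontend pods carry the label app: frontend (as they do in the demo); the resource name frontend-mcs is illustrative:

```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: frontend-mcs
  namespace: mci
spec:
  template:
    spec:
      # Selects the frontend pods in every member cluster
      selector:
        app: frontend
      ports:
      - name: web
        protocol: TCP
        port: 80
        targetPort: 8080
```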

14. Run the below commands only in the config cluster, i.e., cluster1, to create the MCS
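```shell
# MCS/MCI resources are applied only on the config cluster
kubectl config use-context cluster1
kubectl apply -f mcs_boutique.yaml
```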

15. Run the below command in the other clusters after switching context; you can see that an associated headless service is created in all clusters that have pods matching the MCS selector
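For example, for cluster2 (assuming the mci namespace):

```shell
# The MCI controller creates a derived headless service
# in each member cluster; it should appear in this listing
kubectl config use-context cluster2
kubectl get services -n mci
```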

16. Create a file mci_boutique.yaml using the below content. It will be used to create the MCI service
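A sketch, assuming the MCS is named frontend-mcs in the mci namespace; the name frontend-mci is illustrative:

```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: frontend-mci
  namespace: mci
spec:
  template:
    spec:
      # Default backend: all unmatched traffic goes to the frontend MCS
      backend:
        serviceName: frontend-mcs
        servicePort: 80
```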

17. Run the below commands only in the config cluster, i.e., cluster1, to create the MCI
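```shell
# Like the MCS, the MCI resource lives only on the config cluster
kubectl config use-context cluster1
kubectl apply -f mci_boutique.yaml
```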

18. Check the status of the MCI resource using the below command. You should see a VIP created for the service
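Assuming the resource names used above:

```shell
# The VIP appears in the status once the load balancer is
# provisioned; this can take several minutes on first creation
kubectl describe mci frontend-mci -n mci
```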

In addition to the VIP, you can also see that there are multiple resources created by the MCI controller, i.e., a load balancer, backend service, firewall rule, forwarding rule, Network Endpoint Groups and health checks. The load balancer can be seen from the Google Cloud console as well.

The frontend IP will be the same as the VIP shown earlier. You can see that the backend is configured to distribute traffic across NEGs pointing to the three member clusters in different regions.

19. The URL can be accessed at http://&lt;MCI VIP&gt; or http://&lt;load balancer frontend IP&gt;

Coming up..

Samajik’s requirement of effortlessly growing their services across several cloud locations is addressed by Multi Cluster Ingress. Guhan is now even more excited to explore other features of Anthos that Samajik can benefit from. Stay tuned to Guhan and Ram’s conversation to learn more…

Contributors : Pushkar Kothavade, Anchit Nishant, Dhandus
