Google Cloud Anthos Series - 2: Setting up Multi-Cloud between AWS & Google Cloud
In the previous blog post, we delved into the significance of Anthos and how it addresses challenges related to multi-cloud and hybrid environments. If you haven’t had a chance to read the previous article, “Google Cloud Anthos Series - 1: Introduction to Anthos” I highly recommend doing so before continuing here.
The need for Anthos arises from the increasing prevalence of multi-cloud and hybrid setups. Many enterprises utilize a combination of public clouds, private clouds, and on-premises infrastructure to cater to their diverse requirements. However, managing and orchestrating applications across these disparate environments can be quite challenging; Anthos simplifies this complexity by offering a consistent, streamlined approach to application development and deployment.
Key Components & Terminology of Anthos
Before diving into the implementation aspect, let’s familiarize ourselves with some essential components of Anthos.
- Anthos Clusters (AKA Google Kubernetes Engine)
- Anthos Fleet (the unified management platform for Anthos)
Both of these components are very important to understand before working with Anthos so let’s start exploring both of these components one by one.
1. Anthos Clusters (Google Kubernetes Engine)
As we know, every cloud provider offers its own managed Kubernetes service, and you could use platform-specific clusters such as EKS on AWS or AKS on Azure for your workloads in multi-cloud environments. So why did Google Cloud bring its own managed Kubernetes offering, called Anthos clusters (GKE), to other cloud infrastructures? Because GKE is widely regarded as one of the best managed Kubernetes offerings available today, many organizations want to use the same fully managed offering across their entire architecture.
One of the key components of Anthos is Google Kubernetes Engine (GKE), a managed container orchestration system based on Kubernetes. GKE plays a pivotal role in enabling seamless application deployment and scalability across different environments.
Note: In the demo implementation of Anthos on AWS that follows, we will deploy GKE (an Anthos cluster) in the AWS environment.
2. Anthos Fleet
Anthos Fleet is a crucial component of Anthos that serves as the management platform for clusters across various sources, including on-premises environments, AWS, Azure, and Google Cloud. It provides a unified approach to managing heterogeneous environments, enabling centralized control and consistent operations.
With Anthos Fleet, you can manage and monitor multiple clusters from different sources through a single interface. It allows you to define and enforce policies, configure settings, and perform cluster-wide operations across your entire Anthos environment. This centralized management capability simplifies the administration and governance of diverse clusters.
Anthos Fleet supports a declarative approach to configuration management, where you define the desired state of your clusters and Anthos Fleet ensures that the clusters adhere to those specifications. It leverages Kubernetes and GitOps principles to enable consistent and automated cluster management.
By using Anthos Fleet, you can achieve the following benefits:
- Unified Management: Anthos Fleet provides a consistent management experience across clusters, regardless of their underlying infrastructure sources. It abstracts the complexities of managing different environments and enables you to apply consistent policies and configurations.
- Policy Enforcement: Anthos Fleet allows you to define and enforce policies across your clusters. You can ensure compliance with security standards, resource allocation rules, and other organizational policies, promoting a consistent and secure environment.
- Configuration Synchronization: With Anthos Fleet, you can synchronize configurations across clusters, ensuring that they stay consistent and aligned with the desired state. This simplifies configuration management, reduces manual efforts, and minimizes configuration drift.
- Operational Efficiency: Anthos Fleet streamlines cluster operations by providing a unified interface for tasks like upgrading Kubernetes versions, applying patches, or scaling clusters. It simplifies cluster-wide operations, saving time and effort for administrators.
- Monitoring and Insights: Anthos Fleet integrates with monitoring and observability tools, enabling you to gain insights into the health and performance of your clusters. You can monitor resource utilization, troubleshoot issues, and receive alerts from a centralized dashboard.
Okay, here we’re done with learning about key components, so let’s start with how to set up Anthos on AWS.
Multi-Cloud Setup & Configuration with AWS
There are two ways you can utilize Anthos on AWS:
1. Anthos clusters (GKE) on AWS
Google Cloud's managed Kubernetes offering (GKE) is the core of this setup, but here AWS acts as the infrastructure provider. We'll use several AWS resources: the compute offering (EC2 instances for the control-plane and worker nodes), AWS IAM (to enforce IAM capabilities), and the networking offering (a VPC with its subnets, gateways, and so on). With so many components involved, setting everything up manually becomes a tedious task whenever we run into an issue, so to simplify the process we'll use an IaC tool (Terraform) to build the end-to-end Anthos on AWS setup.
Please find the details in the official documentation related to the resources being utilised in this setup.
As you can see in the architecture diagram, first we will create network resources such as the VPC, its subnets, gateways, and security groups, and then the compute and IAM-related resources for the Anthos cluster installation and setup.
To simplify the process, I have already created a repo and a demonstration video for this setup, which you can use.
Pre-requisite Step:
- Authentication & Authorization with AWS
Before moving on to the execution part, make sure you're authenticated in your terminal as a user who is authorized to create the resources shown in the architecture diagram. For this, you need to set two environment variables in your terminal:
```shell
export AWS_ACCESS_KEY_ID=<replace with the access key for your user>
export AWS_SECRET_ACCESS_KEY=<replace with the secret key for your user>
```

There are many other ways to authenticate in your terminal; for other methods, please follow the link.
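As a quick sanity check, here's a minimal sketch (the key values below are fake placeholders) that exports the variables and fails fast if either is empty; with real credentials you could then run `aws sts get-caller-identity` to confirm who you're authenticated as:

```shell
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"        # fake placeholder
export AWS_SECRET_ACCESS_KEY="exampleSecretKey1" # fake placeholder

# Fail fast if either variable is empty, so Terraform doesn't error later.
[ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ] \
  && echo "AWS credentials exported" \
  || { echo "credentials missing" >&2; exit 1; }

# With real credentials, confirm the authenticated identity:
# aws sts get-caller-identity
```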
- Authentication & Authorization with Google Cloud
You'll also need access to the Google Cloud environment, so please follow this link for authentication with Google Cloud.
Note: For the demo, I've used Administrator/Owner access credentials on both the AWS and Google Cloud sides because they have full access to all resources. As a best practice, though, you should use fine-grained privileges to perform this activity; follow this link for detailed information.
- Required dependent tools
In the demo, I used Google Cloud Shell as my local terminal, which comes with most commonly used developer tools (Terraform, git, helm, kubectl, the Google Cloud SDK, etc.) pre-installed, so I highly suggest using Cloud Shell for this setup.
- Clone the repo & understand the repo hierarchy
```shell
git clone https://github.com/abhishek7389/anthos-on-aws-terraform
```

As soon as you clone the repo and open the folder, you'll see that I've used a generalized Terraform folder structure found in most organizations (root and calling module structure).
```shell
cd anthos-on-aws-terraform
```

Here, the modules folder includes all the dependent root modules required for setting up the Anthos cluster and the AWS networking resources such as the VPC, subnets, gateways, and security groups.
The anthos-on-gke and vpc folders contain the calling modules for creating the relevant resources mentioned above.
Step 1.1: Creation of AWS Networking Resources
In reference to the above architecture, you need to create AWS networking resources such as the VPC, subnets, and gateways. To do so, you just need to edit a few files under the vpc folder. There you'll see the usual Terraform files: variables.tf, backend.tf, main.tf, and so on.
Here you just need to change backend.tf, which defines where your Terraform state files will be stored, so edit this file and provide the details of your backend bucket. You also need to edit the terraform.tfvars.sample file: first rename it to terraform.tfvars, then provide the required info asked for in the file. This includes all the information related to the networking setup and its dependencies.
```shell
cd vpc
vi backend.tf   # you can use any editor of your choice
```

```hcl
terraform {
  backend "gcs" {
    bucket = "<YOUR BACKEND GCS BUCKET NAME FOR STORING TF STATEFILES>"
    prefix = "demo/vpc" # path in which your state files will be stored
  }
}
```

Note:
1. To edit, type "i"; after editing, press "esc" and then type ":wq" to save and exit the vi editor.
2. For the demo, I'm storing state files in a GCS bucket, so make sure you have access to the Cloud Storage bucket in which you want to store state files while replicating the demo in your own environment. Alternatively, you can store the state file locally or in another platform's object storage bucket (such as S3), whichever suits you.
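For instance, if you'd rather keep state local or in S3, the backend block would look like one of the following instead (a sketch; bucket names and paths are placeholders):

```hcl
# Option A: local backend - the state file stays on disk next to your config.
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

# Option B: S3 backend - state stored in an AWS S3 bucket.
terraform {
  backend "s3" {
    bucket = "<YOUR S3 BUCKET NAME>"
    key    = "demo/vpc/terraform.tfstate"
    region = "<YOUR S3 BUCKET REGION>"
  }
}
```

Remember that a configuration can declare only one backend block, so pick one option and rerun terraform init after changing it.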
```shell
mv terraform.tfvars.sample terraform.tfvars
vi terraform.tfvars
```

```hcl
region = "<YOUR AWS VPC REGION>"
vpc_main = {
  name = "<YOUR AWS VPC NAME>",
  cidr = "10.0.0.0/16",
  azs  = ["ap-south-1a", "ap-south-1b"],
  private_subnets          = ["10.0.0.0/24", "10.0.1.0/24", "10.0.3.0/24"], # change these CIDR ranges based on your requirements
  private_subnet_name_tags = ["<YOUR AWS PRIVATE VPC SUBNET NAME>", "<YOUR AWS PRIVATE VPC SUBNET NAME>", "<YOUR AWS PRIVATE VPC SUBNET NAME>"],
  public_subnets           = ["10.0.2.0/24"],
  public_subnet_name_tags  = ["<YOUR AWS PUBLIC VPC SUBNET NAME>"]
}
tags = {
  env    = "demo",
  region = "ap-south-1", # these tags are optional
}
```
After providing the necessary details, we’re ready for the creation of networking infrastructure using terraform.
Now, let’s run Terraform to create networking resources —
1. Download the required Terraform modules and providers:
```shell
terraform init
```
2. Validate the deployment:
```shell
terraform plan
```
3. Deploy the networking resources:
```shell
terraform apply
```
Step 1.2: Deployment of Anthos Cluster on AWS
We're now ready to deploy the Anthos cluster in the AWS environment, since the network resources were created in the previous step. Just as we edited the Terraform files under the vpc folder, we need to edit the files here, with one change: we will also edit data.tf, which tells Terraform about the networking details we created in the previous step.
```shell
cd ../anthos-on-gke
vi data.tf
```

```hcl
module "gcp_data" {
  source       = "../modules/gke_on_aws/gcp-data"
  gcp_location = var.gcp_location
  gcp_project  = var.gcp_project_id
}

data "terraform_remote_state" "vpc" {
  backend = "gcs"
  config = {
    bucket = "<YOUR GCS BUCKET NAME>" # same bucket used in the previous Terraform execution
    prefix = "demo/vpc"               # same path used to store the network resources' state files
  }
}
```

Next, add a backend bucket to store the state files:
```shell
vi backend.tf   # you can use any editor of your choice
```

```hcl
terraform {
  backend "gcs" {
    bucket = "<YOUR GCS BUCKET NAME>"
    prefix = "aws/demo/gke_on_aws" # path in the bucket where your state files are going to be stored
  }
}
```

Now provide the essential values to set up the Anthos cluster on AWS:
```shell
mv terraform.tfvars.sample terraform.tfvars
vi terraform.tfvars
```

```hcl
gcp_project_id              = "<YOUR GOOGLE PROJECT ID>"
admin_user                  = "<ANTHOS CLUSTER ADMIN USER EMAIL>"
name_prefix                 = "<YOUR GOOGLE PROJECT ID>"
node_pool_instance_type     = "t3.large"
control_plane_instance_type = "t3.large"
no_of_node_pool             = 1
cluster_version             = "1.23.8-gke.1700"
pod_address_cidr_blocks     = ["10.0.4.0/22"]
service_address_cidr_blocks = ["10.0.8.0/22"]
max_pods_per_node           = 110
max_node_count              = 2
min_node_count              = 1
size_gib_main_vol_cp        = 40
size_gib_root_vol_cp        = 30
size_gib_root_vol_np        = 40
iops                        = 3000
volume_type                 = "GP3"
# Use 'gcloud container aws get-server-config --location [gcp-region]' to see availability
gcp_location = "asia-south1"
aws_region   = "ap-south-1"
tags = {
  env    = "demo",
  region = "ap-south-1",
}
```

You can adjust the parameters in the above files based on your requirements and use cases. Now we're ready for the execution part.
1. Download the required Terraform modules and providers:
```shell
terraform init
```
2. Validate the deployment:
```shell
terraform plan
```
3. Deploy the Anthos cluster:
```shell
terraform apply
```
Note: This execution will take around 5–10 minutes, so please be patient and don't close the terminal while the Terraform scripts are running.
After the Terraform scripts complete successfully, you'll find a connect command for your cluster in the output. You can use it to connect to your Anthos on AWS cluster with the admin account we specified in the terraform.tfvars file under the anthos-on-gke folder.
```shell
gcloud container hub memberships get-credentials <REPLACE IT WITH YOUR CLUSTER NAME>
```

The above command configures the cluster's credentials in your local terminal using the Anthos Connect feature, as shown below.
Now you can interact with your cluster from the terminal using the kubectl utility, as shown below.
We talked a lot about Anthos Fleet and its features in the previous steps, so now it's time to see it in action: what it actually looks like and how it helps us manage this heterogeneous environment.
Google Cloud Console -> Search Bar -> search for "Anthos overview"

This is the Anthos overview page, where you can see general details about the clusters added from different platforms. For more details, click Clusters in the left menu bar.
Here you can see that our Anthos cluster, which lives in the AWS environment, has been added. To see more details about this cluster, simply click the cluster name.
If you click View more details, you'll see that the cluster was created with the same configuration provided while executing the Terraform scripts, and that it uses AWS as the infrastructure provider for compute, networking, and other resources.
As the screenshot shows, all the underlying resources (role, compute instance type, encryption key, and subnet) come from AWS itself, yet they can be managed from Google Cloud because we're running multi-cloud with Anthos.
In the last blog, we discussed many of the powerful features Google Cloud Anthos provides. To enable and use them, find the section called Manage features in the previous screenshot; click it and you'll see a management page from which you can manage all the Anthos features for your cluster.
Note: we’ll explore each & every feature in upcoming blogs so stay tuned to see all features in further videos & blog posts.
Step 1.3: Log in & Validation of Anthos Cluster
Whenever you open the Kubernetes Engine page in Google Cloud, you'll see that it asks you to log in.

Google Cloud Console -> Search Bar -> search for "GKE" & click on Kubernetes Engine

Here you need to log in to the cluster using the admin email you provided while registering your cluster.
Once you're logged in, you're ready to deploy inside your Anthos cluster.
Step 1.4: Sample Deployment in Anthos Cluster
On the same page, under Workloads, you'll find the option to deploy workloads. (This is the same method you use to deploy any workload in GKE through the Google Cloud Console.)
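If you prefer the CLI over the console, the same deployment can be applied with kubectl. Here is a minimal sketch of a manifest; the names and image are placeholders of mine, not the demo's actual workload:

```yaml
# sample-deployment.yaml - apply with: kubectl apply -f sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-anthos            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-anthos
  template:
    metadata:
      labels:
        app: hello-anthos
    spec:
      containers:
      - name: hello
        image: nginx:1.25       # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-anthos-svc
spec:
  type: LoadBalancer            # exposed via an AWS load balancer in this setup
  selector:
    app: hello-anthos
  ports:
  - port: 80
    targetPort: 80
```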
Once your deployment completes, you can validate that all the resources backing it come from the AWS environment itself: the deployment is managed from Google Cloud via Anthos, but the underlying dependencies are still on AWS.
As you can see, the exposed endpoint is the same as one you'd get by deploying an endpoint in the AWS environment; open it in your browser and it will show the output of your deployment.
You can validate the same from the command line as well using kubectl utility inside your existing terminal.
Note: You can validate the same in your AWS environment as well. Whatever resources you create will be available on the AWS side, but you can fully manage them from Google Cloud.
Woohoo!! We're done with the implementation and setup of Anthos clusters on AWS.
2. Amazon EKS Cluster with Anthos
There is another way to use Anthos on AWS: registering your existing EKS cluster with Anthos and managing it from Google Cloud.
As you know, we already set up AWS credentials in Google Cloud Shell, so now we just need to configure the terminal so it can access our existing EKS cluster by running the command below.
```shell
aws eks update-kubeconfig --region <YOUR EXISTING EKS CLUSTER REGION> --name <YOUR EXISTING EKS CLUSTER NAME>
```

Note: If you get an error that the aws command is not found, run the commands below to install the AWS CLI and then rerun the command above.
```shell
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
```

Configurations & Setup
Here we’re going to use the manual CLI method to attach our existing EKS cluster to Google Cloud Anthos.
We can't directly attach just any EKS cluster to Anthos, because there may be compatibility issues. To get the supported versions in a specific region, run the command below; it lists the supported Anthos attached clusters platform versions that can be installed on your EKS cluster.
```shell
gcloud container attached get-server-config \
  --location=<YOUR GOOGLE CLOUD REGION IN WHICH YOU WANT TO ADD YOUR EKS CLUSTER>
```

Note: Anthos attached clusters use the same version numbering convention as GKE, for example 1.21.5-gke.1. When attaching or updating your cluster, you must choose a platform version whose minor version is the same as, or one level below, the Kubernetes version of your cluster. For example, you can attach a cluster running Kubernetes v1.22.* with Anthos attached clusters platform version 1.21.* or 1.22.*.
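That compatibility rule is easy to check mechanically. Here is a small sketch of a bash helper (the function name is mine, not from any official tooling) that compares minor versions:

```shell
# compatible <cluster-k8s-version> <platform-version>
# Succeeds when the platform's minor version equals the cluster's minor
# version or is exactly one below it (the Anthos attached clusters rule).
compatible() {
  local c=${1#*.}; c=${c%%.*}   # cluster minor, e.g. 1.22.4 -> 22
  local p=${2#*.}; p=${p%%.*}   # platform minor, e.g. 1.21.5 -> 21
  [ "$p" -eq "$c" ] || [ "$p" -eq $((c - 1)) ]
}

compatible 1.22.4 1.22.1 && echo "1.22.1 is supported"
compatible 1.22.4 1.21.5 && echo "1.21.5 is supported"
compatible 1.22.4 1.20.9 || echo "1.20.9 is NOT supported"
```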
Step 2.1: Retrieve the OpenID connect issuer URL
```shell
aws eks describe-cluster \
  --region <YOUR EXISTING EKS CLUSTER REGION> \
  --name <YOUR EXISTING EKS CLUSTER NAME> \
  --query "cluster.identity.oidc.issuer" \
  --output text
```

This returns an OIDC issuer URL (issuer-url = https://oidc.eks.&lt;EKS REGION&gt;.amazonaws.com/id/&lt;ID TOKEN&gt;), which will be used in later steps, so please make a note of it.
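Since the issuer URL is needed again when registering the cluster, it's convenient to capture it in a variable. A sketch with a fake placeholder URL (in practice you'd assign the output of the describe-cluster command above):

```shell
# In practice: OIDC_ISSUER=$(aws eks describe-cluster ... --output text)
OIDC_ISSUER="https://oidc.eks.ap-south-1.amazonaws.com/id/EXAMPLE1234567890"

# The trailing path segment is the ID token part of the URL.
OIDC_ID="${OIDC_ISSUER##*/}"
echo "issuer-url: $OIDC_ISSUER"
echo "id token:   $OIDC_ID"
```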
Step 2.2: Store the Cluster Context in an Environment Variable

```shell
KUBECONFIG_CONTEXT=$(kubectl config current-context)
```

Step 2.3: Extracting the Project Number
```shell
gcloud projects list \
  --filter="$(gcloud config get-value project)" \
  --format="value(PROJECT_NUMBER)"
```

Step 2.4: Register the Cluster
```shell
gcloud container attached clusters register <YOUR EXISTING EKS CLUSTER NAME> \
  --location=<YOUR EXISTING EKS CLUSTER REGION> \
  --fleet-project=<YOUR GOOGLE CLOUD PROJECT NUMBER FROM THE HOME PAGE OR THE ABOVE COMMAND> \
  --platform-version=<SUPPORTED ANTHOS PLATFORM VERSION FROM THE get-server-config STEP> \
  --distribution=eks \
  --issuer-url=<OIDC ISSUER URL FROM STEP 2.1> \
  --context=$KUBECONFIG_CONTEXT \
  --admin-users=<YOUR EMAIL ID> \
  --kubeconfig=$KUBECONFIG_PATH \
  --description=EKS
```

This registers your existing EKS cluster with Google Cloud Anthos.
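Note that the register command references $KUBECONFIG_CONTEXT (set in Step 2.2) and $KUBECONFIG_PATH, which hasn't been defined yet. A small sketch of setting these inputs up front so the long command stays readable; all values below are placeholders of mine:

```shell
# Placeholders - replace with your real values.
EKS_CLUSTER_NAME="my-eks-cluster"
EKS_REGION="ap-south-1"
PLATFORM_VERSION="1.22.8-gke.200"    # pick one listed by 'gcloud container attached get-server-config'
KUBECONFIG_PATH="$HOME/.kube/config" # default kubeconfig location
export KUBECONFIG_PATH

echo "Registering $EKS_CLUSTER_NAME in $EKS_REGION at platform version $PLATFORM_VERSION"
```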
Step 2.5: Enable Logging & Monitoring
```shell
gcloud projects add-iam-policy-binding <REPLACE IT WITH GOOGLE_PROJECT_ID> \
  --member="serviceAccount:<REPLACE IT WITH GOOGLE_PROJECT_ID>.svc.id.goog[gke-system/gke-telemetry-agent]" \
  --role=roles/gkemulticloud.telemetryWriter
```

After this command runs successfully, you will be able to see your attached cluster on the Anthos overview page, just as we did after the Terraform execution in the previous method.
Google Cloud Console -> Search Bar -> search for "Anthos overview"

Here it asks you to log in to the cluster, the same as in the previous method, so go to the Kubernetes Engine page in Google Cloud and log in using the admin email you provided while registering your cluster.

Google Cloud Console -> Search Bar -> search for "GKE" & click on Kubernetes Engine

Step 2.6: Log in & Validation of EKS Cluster in Anthos
For validation and testing, repeat these steps from the previous setup of Anthos clusters on AWS.
Here is a sample deployment of a customized image from Google Container Registry.
Once it is deployed, you'll be able to see the details.
Once you open the endpoint mentioned above in your browser, you can see your application.
Now, let's validate the same on the AWS side.

```
AWS Management Console -> Search Bar -> EKS -> click on your EKS cluster
-> Resources -> under Pods (filter for the namespace; we've deployed to the default namespace)
-> under Service & Networking -> Service/Endpoints
```

Whatever resources you deploy here from Google Cloud, you'll find them on your EKS cluster's description page as well.
Here you can see we have the same information as on the Google Cloud side. In this multi-cloud setup, our resources live on the AWS side, but we manage them from Google Cloud. In this way, you can manage your heterogeneous environment from a single pane of glass (Google Cloud Anthos).
References
Official Google Cloud documentation for Anthos on AWS
Congratulations!! You've completed a multi-cloud setup between AWS and Google Cloud using Anthos. I hope you found this blog helpful and are excited about the upcoming Anthos feature integrations with this setup, so that we can build an end-to-end, production-grade multi-cloud environment.
Let's stop here for now. This series will cover all the features with sample demos, so stay tuned. I'm very happy to see what use cases you build for a better tomorrow, and I'll be part of this amazing journey.
Thank you for reading…… (Our next blog covers multi-cloud setup with other cloud platforms using Google Cloud Anthos, with the addition of some amazing Anthos features to make a production-ready environment.)
If you have any queries regarding the article, I’m very happy to read them in the comment section or you can reach out to me via LinkedIn

