Extending your Istio service mesh across GKE clusters and Compute Engine instances

Ameer Abbas
Google Cloud - Community
12 min read · Aug 19, 2020

This tutorial shows how to deploy a multi-tier microservices application that spans a Kubernetes cluster and a Compute Engine instance by using a single Istio service mesh over the entire application. This tutorial is intended for Kubernetes operators who have a basic knowledge of Kubernetes concepts. Knowledge of Istio is not required.

I want to give a shout-out to Megan O’Keefe (@askmeegs) for her awesome work with Istio, parts of which I shamelessly plagiarized for this tutorial. You can find many Istio-related samples maintained by Megan at https://github.com/GoogleCloudPlatform/istio-samples.

Istio lets you extend a service mesh that is running inside a Kubernetes cluster to services that run on virtual machine (VM) instances outside of the Kubernetes cluster.

Note: Istio is an open source tool and not an official Google product.

Istio is an open source implementation of a service mesh that lets you discover, dynamically route to, and more securely connect to Services that run on Kubernetes clusters. Istio also provides a policy-driven framework for routing, load balancing, throttling, telemetry, circuit breaking, authenticating, and authorizing service calls in the mesh with few to no changes to your application code.

When you install Istio in a Kubernetes cluster, the Istio control plane uses the Kubernetes service registry to automatically discover and create a service mesh of interconnected Services (or microservices) that are running inside the local cluster. Istio uses Envoy sidecar proxies running inside each Pod to manage Pod-to-Pod traffic routing and security and to provide observability for all Services and workloads that run inside the cluster. You can also deploy Envoy to VMs to extend the service mesh beyond Kubernetes clusters.

Services that are running in one Kubernetes cluster might need to talk to services running in VMs that are outside the cluster. For example, microservices that run inside a Kubernetes cluster might need access to a legacy monolithic service running in a VM or a database cluster running in VMs. Istio lets you create a service mesh beyond a single Kubernetes cluster to include external services that run in VMs, outside of Kubernetes.

Istio provides two main configuration options for mesh expansion deployments:

  • Single network. Both the Kubernetes cluster and VMs are on the same network. Pods in the Kubernetes cluster and VMs can directly access each other by using their IP address.
  • Multiple networks. The Kubernetes cluster and VMs are on separate networks and cannot access each other by using direct IP connectivity. In this scenario, Pods and VMs use a gateway (Istio-managed Envoy edge proxy) to access each other.

In this tutorial, you deploy Istio on a GKE cluster and expand the service mesh to a service running on a Compute Engine instance in a single Virtual Private Cloud (VPC) network. The cluster that hosts the Istio control plane has direct IP connectivity to the VM. For this tutorial, you use a 10-tier sample microservices app called Online Boutique, split across a GKE cluster and a Compute Engine instance. You build the following architecture inside a Google Cloud project.

Tutorial Architecture

All Online Boutique Services except productcatalogservice run inside a GKE cluster. The Service productcatalogservice runs on a Compute Engine VM instance. Using Istio mesh expansion, you add productcatalogservice to the service mesh, which contains the rest of the Online Boutique services.

Objectives

  • Create a GKE cluster called west.
  • Install Istio with mesh expansion enabled on the west cluster.
  • Create a Compute Engine instance called istio-gce in the same VPC as the west cluster.
  • Install the Online Boutique service productcatalogservice on the istio-gce instance.
  • Register productcatalogservice to the Istio service mesh in the west cluster.
  • Install the remaining Online Boutique app microservices on the west cluster.
  • Observe the expanded service mesh.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Compute Engine

Use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

  1. Select or create a Google Cloud project.

2. Enable billing for your project.

3. Enable the Kubernetes Engine and Source Repositories APIs.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Cleaning up for more detail.

Preparing your environment

You run all of the terminal commands in this tutorial from Cloud Shell.

  1. Open Cloud Shell:

2. Download the required files for this tutorial by cloning the Git repository:

cd $HOME
git clone https://github.com/GoogleCloudPlatform/istio-multicluster-gke.git

3. Make the repository folder your $WORKDIR, from which you do all the tasks related to this tutorial:

cd $HOME/istio-multicluster-gke
WORKDIR=$(pwd)

You can delete the folder when you’re finished with the tutorial.

4. Install kubectx and kubens:

git clone https://github.com/ahmetb/kubectx $WORKDIR/kubectx
export PATH=$PATH:$WORKDIR/kubectx

These tools let you quickly switch contexts and namespaces, which makes it easier to work with multiple Kubernetes clusters.
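
As a quick illustration of how you use them later (after the west context exists and is renamed in the next section), switching cluster and namespace each take a single command:

# List all contexts known to kubectl; kubectx highlights the current one.
kubectx
# Switch the active kubectl context to the west cluster.
kubectx west
# Point subsequent kubectl commands at the istio-system namespace.
kubens istio-system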

Creating the GKE cluster

In this section, you create a GKE cluster in the default VPC with IP Alias enabled. With Alias IP addresses, GKE clusters can allocate IP addresses from a CIDR block known to Google Cloud. This setup results in Pod IPs being routable within the VPC, giving Pods direct IP connectivity to Compute Engine instances in the same VPC.

  1. In Cloud Shell, create a GKE cluster that is called west in the us-west2 region:
gcloud container clusters create west \
--zone us-west2-a --username "admin" \
--machine-type "n1-standard-2" \
--cluster-version 1.14 \
--image-type "COS" --disk-size "100" \
--num-nodes "5" --network "default" \
--enable-cloud-logging --enable-cloud-monitoring \
--enable-ip-alias

2. Connect to the west GKE cluster to generate an entry in the kubeconfig file:

export PROJECT_ID=$(gcloud info --format='value(config.project)')
gcloud container clusters get-credentials west --zone us-west2-a --project ${PROJECT_ID}

The kubeconfig file authenticates you to GKE clusters by storing a user and context entry for each cluster.
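
If you want to inspect the entry that get-credentials created, you can list the contexts in your kubeconfig:

# Show all contexts in the kubeconfig file; the asterisk marks the active one.
kubectl config get-contexts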

3. For convenience, use kubectx to rename the context:

kubectx west=gke_${PROJECT_ID}_us-west2-a_west

4. Give yourself (your Google user) the cluster-admin role for the cluster:

kubectl create clusterrolebinding user-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account) --context west

This role lets you perform administrative tasks on these clusters.
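
You can verify that the binding took effect with a quick authorization check:

# Ask the cluster whether your user can perform any verb on any resource.
# The command prints "yes" if the cluster-admin binding is in place.
kubectl auth can-i '*' '*' --context west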

Installing Istio

In this section, you install and configure Istio with mesh expansion enabled on the west cluster.

Download and install Istio

  1. In Cloud Shell, download Istio:
cd ${WORKDIR}
export ISTIO_VERSION=1.6.5
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -

2. Create the istio-system namespace in the west cluster:

kubectl --context west create namespace istio-system

3. Using the sample certificates provided with the Istio release, create a Kubernetes secret:

kubectl --context west create secret generic cacerts -n istio-system \
--from-file=${WORKDIR}/istio-${ISTIO_VERSION}/samples/certs/ca-cert.pem \
--from-file=${WORKDIR}/istio-${ISTIO_VERSION}/samples/certs/ca-key.pem \
--from-file=${WORKDIR}/istio-${ISTIO_VERSION}/samples/certs/root-cert.pem \
--from-file=${WORKDIR}/istio-${ISTIO_VERSION}/samples/certs/cert-chain.pem

The Istio control plane uses this secret to sign workload certificates.
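
To confirm that the secret was created correctly, you can describe it and check that it holds the four certificate files:

# The Data section should list ca-cert.pem, ca-key.pem, root-cert.pem,
# and cert-chain.pem.
kubectl --context west -n istio-system describe secret cacerts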

4. Install Istio:

${WORKDIR}/istio-${ISTIO_VERSION}/bin/istioctl install \
--set profile=demo \
--set values.global.meshExpansion.enabled=true

In this tutorial, you deploy Istio using the demo profile. You can deploy Istio by using any profile or even a custom profile as long as you enable the meshExpansion parameter.
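
If you're curious what a profile contains, istioctl can print it in full:

# Dump the complete configuration that the demo profile applies.
${WORKDIR}/istio-${ISTIO_VERSION}/bin/istioctl profile dump demo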

Note: Istio takes 2–3 minutes to install.

5. Ensure that all Istio deployments are running:

kubectl --context west get pods -n istio-system

The output is similar to the following:

OUTPUT (DO NOT COPY)
NAME READY STATUS RESTARTS AGE
grafana-858f9bcbcd-jnjbf 1/1 Running 0 70s
istio-egressgateway-5548786875-4zwcv 1/1 Running 0 72s
istio-ingressgateway-dfcdf4d97-28rmn 1/1 Running 0 72s
istio-tracing-7cf5f46848-jbhsk 1/1 Running 0 70s
istiod-6ff4d846d7-pmtrx 1/1 Running 0 85s
...

Preparing the Compute Engine instance for Istio

  1. Define a namespace that the Compute Engine instance joins:
export SERVICE_NAMESPACE="default"

2. Store the istio-ingressgateway external load balancer IP address in a file:

export ISTIOD_IP=$(kubectl --context west get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $ISTIOD_IP > istiod.txt

When mesh expansion is enabled, Istio exposes istiod (the Istio control plane) through the istio-ingressgateway Service. The Compute Engine instance uses this IP address to access istiod.
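
You can confirm the exposure by listing the gateway's ports; with mesh expansion enabled in Istio 1.6, you should see a tcp-istiod entry (port 15012) among them:

# Print each port name and number exposed by the istio-ingressgateway Service.
kubectl --context west -n istio-system get service istio-ingressgateway \
-o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\n"}{end}'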

3. Get the ClusterIP address range from the west cluster:

ISTIO_SERVICE_CIDR=$(gcloud container clusters describe west --zone us-west2-a --project ${PROJECT_ID} --format "value(servicesIpv4Cidr)")

4. Generate a cluster.env configuration file that you deploy in the Compute Engine instance. This file contains the Kubernetes ClusterIP address ranges to intercept and redirect using Envoy.

echo -e "ISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
echo "ISTIO_INBOUND_PORTS=3550" >> cluster.env

The service running on the Compute Engine instance uses the ISTIO_INBOUND_PORTS ports. In this tutorial, you deploy productcatalogservice to the Compute Engine instance. The Service productcatalogservice runs on port 3550.
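
You can print the file to confirm its contents:

cat cluster.env

The output is similar to the following (your ClusterIP range will differ):

OUTPUT (DO NOT COPY)
ISTIO_SERVICE_CIDR=10.7.240.0/20

ISTIO_INBOUND_PORTS=3550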

5. Generate the certificates that Compute Engine uses:

go get istio.io/istio/security/tools/generate_cert
go run istio.io/istio/security/tools/generate_cert \
-client -host spiffe://cluster.local/vm/vmname --out-priv key.pem --out-cert cert-chain.pem --mode self-signed
kubectl --context west -n istio-system get cm istio-ca-root-cert -o jsonpath='{.data.root-cert\.pem}' > root-cert.pem

To use mesh expansion, you must provision the Compute Engine instance with certificates that are signed by the same root CA as the rest of the mesh.
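
If you want to sanity-check the generated certificate, openssl (available by default in Cloud Shell) can show the SPIFFE identity embedded in it:

# Print the subject alternative name, which carries the spiffe:// identity
# passed to generate_cert.
openssl x509 -in cert-chain.pem -noout -text | grep -A1 'Subject Alternative Name'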

Install and configure the Compute Engine instance

In this section, you create a Compute Engine instance that runs one of the Online Boutique app microservices.

  1. In Cloud Shell, create a Compute Engine instance named istio-gce. Use a tag with the value of istio-gce. This tag is used in the next step for creating firewall rules.
GCE_INSTANCE_NAME="istio-gce"
gcloud compute --project=$PROJECT_ID instances create $GCE_INSTANCE_NAME --zone=us-west2-a \
--machine-type=n1-standard-2 --subnet=default --network-tier=PREMIUM --maintenance-policy=MIGRATE \
--image-family=ubuntu-1604-lts --image-project=ubuntu-os-cloud --boot-disk-size=10GB \
--boot-disk-type=pd-standard --boot-disk-device-name=$GCE_INSTANCE_NAME --tags="istio-gce"

2. Get the GKE Pod IP CIDR range:

export GKE_POD_CIDR=$(gcloud container clusters describe west --zone us-west2-a --format=json | jq -r '.clusterIpv4Cidr')

3. Create a firewall rule that allows the GKE Pod IP CIDR range to communicate with the istio-gce instance (using the istio-gce tag assigned in the previous step) on TCP port 3550:

gcloud compute firewall-rules create k8s-to-istio-gce \
--description="Allow k8s pods CIDR to istio-gce instance" \
--source-ranges=$GKE_POD_CIDR \
--target-tags="istio-gce" \
--action=ALLOW \
--rules=tcp:3550

The Service productcatalogservice runs in the istio-gce instance. This Service listens on TCP port 3550.
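
To double-check the rule before moving on, describe it and confirm the source range, target tag, and port:

# The output should show your Pod CIDR under sourceRanges, the istio-gce tag
# under targetTags, and tcp:3550 under allowed.
gcloud compute firewall-rules describe k8s-to-istio-gce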

4. Generate an SSH key so that you can connect to the Compute Engine instance using SSH, and add the SSH private key to the SSH authentication agent:

USER=$(gcloud config get-value account)
ssh-keygen -t rsa -N '' -b 4096 -C "$USER" \
-f $HOME/.ssh/google_compute_engine
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine

5. Create the gce-script.sh file, which deploys the Envoy proxy and productcatalogservice to the istio-gce instance:

export GWIP=$ISTIOD_IP
envsubst < ./istio-mesh-expansion-gce/gce-script-tmpl.sh > ./istio-mesh-expansion-gce/gce-script.sh

6. Using secure copy protocol (SCP), copy the certificates, the istiod.txt file, and the gce-script.sh shell script to the Compute Engine instance:

gcloud compute scp --project=${PROJECT_ID} --zone=us-west2-a {key.pem,cert-chain.pem,cluster.env,root-cert.pem,istiod.txt,./istio-mesh-expansion-gce/gce-script.sh} ${GCE_INSTANCE_NAME}:~

7. Run the gce-script.sh script in the Compute Engine instance:

gcloud compute --project ${PROJECT_ID} ssh --zone us-west2-a ${GCE_INSTANCE_NAME} --command="chmod +x gce-script.sh; ./gce-script.sh"

The gce-script.sh script prepares the istio-gce instance to be part of the Istio service mesh. The script performs the following:

  • Configures the /etc/hosts file with the Istiod IP address to connect back to the Istio control plane in the west cluster.
  • Configures the Envoy agent on the Compute Engine instance.
  • Configures the certificates for mTLS communications.
  • Installs Docker, then deploys the productcatalogservice Docker image on the istio-gce instance.
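
For reference, the core of the script looks similar to the following sketch, based on the Istio 1.6 VM onboarding flow (the exact file paths are assumptions; consult gce-script.sh itself for the precise commands):

# Resolve istiod through the ingress gateway IP captured earlier.
ISTIOD_IP=$(cat istiod.txt)
echo "${ISTIOD_IP} istiod.istio-system.svc" | sudo tee -a /etc/hosts

# Place the mesh configuration and workload certificates where the Istio
# sidecar package expects them.
sudo mkdir -p /var/lib/istio/envoy /etc/certs
sudo cp cluster.env /var/lib/istio/envoy/
sudo cp root-cert.pem cert-chain.pem key.pem /etc/certs/

# Start the Envoy sidecar as a systemd service.
sudo systemctl start istio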

Deploying Online Boutique

Online Boutique consists of 10 microservices that are written in different programming languages. Earlier in this tutorial, you installed productcatalogservice in the istio-gce instance. In this section, you install the remaining nine microservices in the west cluster in the default namespace.

  1. Label the default namespace for automatic Istio sidecar proxy injection.
kubectl --context west label namespace default istio-injection=enabled

This step ensures that all Pods created in the default namespace have the Envoy sidecar container deployed.
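
You can verify the label with the -L flag, which adds a column for it:

# The default namespace should show "enabled" in the ISTIO-INJECTION column.
kubectl --context west get namespace -L istio-injection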

2. Install the Online Boutique app microservices in the west cluster:

kubectl --context west -n default apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
kubectl --context west -n default apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/istio-manifests.yaml

3. Delete the productcatalogservice Service and Deployment from the west cluster:

kubectl --context west -n default delete svc productcatalogservice
kubectl --context west -n default delete deployment productcatalogservice

4. Wait a few moments, and then ensure that all workloads are up and running:

kubectl --context west get pods

The output is similar to the following:

OUTPUT (DO NOT COPY)
NAME READY STATUS RESTARTS AGE
adservice-86674bf94d-vlkwn 2/2 Running 0 109s
cartservice-9cf968485-srbsq 2/2 Running 2 110s
checkoutservice-74df4f44c8-d7s9b 2/2 Running 0 111s
currencyservice-6444b89474-dwl75 2/2 Running 0 109s
...

This output shows that the productcatalogservice Pod is not running inside the west cluster.

Add the Compute Engine instance to the service mesh

  1. Get the IP address of the Compute Engine instance:
export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_INSTANCE_NAME} --zone us-west2-a)

2. Add the Compute Engine instance to the service mesh:

${WORKDIR}/istio-${ISTIO_VERSION}/bin/istioctl experimental add-to-mesh external-service productcatalogservice ${GCE_IP} grpc:3550 -n default

This command creates a Kubernetes Service object as well as a ServiceEntry object for productcatalogservice that points to the IP address of the Compute Engine instance.
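
You can verify both objects (a sanity check, not required by the tutorial):

# The Service gives in-mesh workloads a stable DNS name for the VM.
kubectl --context west -n default get service productcatalogservice

# The ServiceEntry maps that name to the Compute Engine instance's IP address.
kubectl --context west -n default get serviceentry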

3. Restart the Istio service on the Compute Engine instance:

gcloud compute --project $PROJECT_ID ssh --zone us-west2-a ${GCE_INSTANCE_NAME} --command="sudo systemctl stop istio; sudo systemctl start istio;"

Access the Online Boutique application

  1. Get the Istio ingress gateway external IP address for the west cluster:
kubectl --context west get -n istio-system service istio-ingressgateway -o json | jq -r '.status.loadBalancer.ingress[0].ip'

The output is similar to the following:

EXTERNAL_IP

2. Enter the Istio ingress gateway IP address in a web browser. The Online Boutique app main page is displayed.

Online Boutique App Frontend

To confirm that the app is fully functional across the west cluster and the istio-gce instance, browse the products, add products to your cart, and proceed with a checkout. The Service productcatalogservice is running in a dedicated Compute Engine instance while the remaining microservices are running on the GKE cluster.
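
If you prefer a command-line smoke test, a plain HTTP request to the frontend should return a 200 status code:

# Store the ingress gateway IP address, then request the home page.
export GATEWAY_IP=$(kubectl --context west -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s -o /dev/null -w "%{http_code}\n" http://${GATEWAY_IP}/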

Monitoring the service mesh

You can use Kiali to monitor and visualize the service mesh. Kiali is a service mesh observability tool that’s installed as part of the Istio installation.

  1. In Cloud Shell, expose the Kiali service on the west cluster:
${WORKDIR}/istio-${ISTIO_VERSION}/bin/istioctl dashboard kiali &

The output is similar to the following:

Failed to open browser; open http://localhost:35755/kiali in your browser.

2. Open the Kiali web interface by navigating to the HTTP link in the preceding output.

3. At the Kiali login prompt, log in with the username admin and password admin.

4. From the menu, select Graph.

5. From the Select a namespace drop-down list, select default.

6. From the Graph drop-down list, select Service graph.

7. (Optional) To see loadgenerator generating traffic to your app, from the Display drop-down list, select Traffic Animation.

The following diagram shows a single Istio service mesh for Services that are spread across a GKE cluster and a Compute Engine instance.

Kiali Service Topology

This diagram shows the following:

  • The microservices running in the west cluster are described by their names only and are denoted by a triangle symbol.
  • The services running in the istio-gce instance are denoted by a domain symbol (a five-sided polygon).

The Istio service mesh is extended to include applications running on VMs. The services communicate securely between the GKE cluster and Compute Engine instances through the Istio ingress gateway by using mTLS. You can use the steps in this tutorial to add more services that run on Compute Engine instances to your mesh.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Delete the project

The easiest way to eliminate billing is to delete the project you created for the tutorial.

To delete the project:

  1. In the Cloud Platform Console, go to the Projects page.

2. In the project list, select the project you want to delete and click Delete.

3. In the dialog, type the project ID, and then click Shut down to delete the project.

Thank you for going through this tutorial. If you have any comments or feedback about this tutorial, please feel free to leave a note.

Ameer Abbas
Google Cloud - Community

I do Cloud architecture, Kubernetes, Service Mesh, CI/CD and other cloud native things. Solutions Architect at Google Cloud. Opinions stated here are my own.