Accessing GKE private clusters through IAP

Luca Prete
Google Cloud - Community
5 min read · Oct 26, 2021

TL;DR

The article shows how to connect to the control plane of a GKE private cluster, leveraging a proxy and an IAP tunnel.

Related Work and Motivations

While writing this article, I found some interesting related work, which I’d like you to consider complementary to this one and which is definitely worth reviewing before deciding which path works best for you.

The official Google Cloud documentation offers a tutorial for accessing private clusters, leveraging proxies installed directly on the GKE cluster itself.
While this approach is more lightweight than the one described in this article, I’ve noticed over time that it doesn’t work for everybody: many users prefer to keep the proxy separate from their cluster, so they don’t risk losing access to it if the cluster itself has issues.

More surprisingly, almost at the end of my writing, I discovered an older article by Peter Hrvola proposing the same approach I describe. While I aligned the syntax of the commands used throughout this article to avoid confusion (basically, changing the order of the parameters), I felt that offering a similar view of the same solution could still add value for the reader. I therefore decided to mention Peter’s article here and move forward with the publication.

The example provided below is complemented by working Terraform code, which can be found in the Cloud Foundation Fabric repository. The sample code creates a shared VPC, a GKE cluster and a bastion host with TinyProxy installed.

GKE Private Clusters

GKE Private Clusters are becoming more and more relevant, especially when it comes to production deployments.

While private clusters are a great way to better segregate both master and worker nodes from the Internet, their configuration can sometimes look complex and confusing. This includes how administrators connect from their clients to the control plane in order to interact with the Kubernetes APIs.

Private Clusters leverage VPC peering to connect the control plane and the workers. As such, master nodes are reachable through private IP addresses, and only from the VPC where the workers live.

A typical GKE cluster deployment

It’s important to remember that VPC peerings are non-transitive. Moreover, the path from on-premises clients to the masters could still be long and complex, requiring non-trivial wiring (routes, NAT rules, firewall rules and more).
How can users connect to the control plane, privately and securely, without going through a convoluted network setup?

Identity Aware Proxy (IAP) Tunnels

Identity Aware Proxy (IAP) tunnels allow users to forward TCP traffic from outside Google Cloud to their GCE instances, without directly exposing them on the Internet.
Instead of connecting directly to the end machines, users call a public Google Cloud API (via gcloud or through the console) which creates a WebSocket tunnel that encapsulates the private traffic. Google authorizes the request and acts as a bridge between the users and their VMs.
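For instance (instance name, project and zone are placeholders), a raw TCP tunnel towards port 22 of a private VM can be opened with gcloud:

# Forward local port 2222 to port 22 of a private VM, through IAP
gcloud compute start-iap-tunnel my-private-vm 22 \
--project my-test-project \
--zone europe-west1-b \
--local-host-port localhost:2222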

Just as with plain SSH, IAP tunnels can also be used to establish connections to other hosts, “jumping” through the first machine users connect to. Let’s see an example:

Administrators connect to the destination VM, leveraging an IAP tunnel passing through the jump-vm. Both machines are configured with private IPs only.

In this example, the client connects from on-premises to the destination VM, leveraging the IAP tunnel created through the jump host machine. This can be easily achieved by typing a few commands on the client machine:

# Create the SSH tunnel
gcloud compute ssh jump-vm \
--project my-test-project \
--zone europe-west1-b \
-- -L 2222:192.168.0.200:22 -N -q -f
# Connect to the destination VM
ssh 127.0.0.1 -p 2222
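
If the jump VM has no external IP, gcloud automatically falls back to tunnelling the SSH connection through IAP; the tunnel can also be requested explicitly with the --tunnel-through-iap flag. As a variant of the command above:

# Same tunnel, forcing the connection through IAP
gcloud compute ssh jump-vm \
--project my-test-project \
--zone europe-west1-b \
--tunnel-through-iap \
-- -L 2222:192.168.0.200:22 -N -q -f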

It’s important to note that for this example to work, two ingress firewall rules should be set in the VPC (a sketch of both follows):

  • one allowing TCP traffic on port 22 from the IAP range (35.235.240.0/20) to the jump-vm
  • one allowing TCP traffic on port 22 from the jump-vm to the destination VM
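As a rough sketch (rule names, network name and tags are placeholders, and assume the VMs are tagged accordingly), the two rules could look like this:

# Allow SSH from the IAP range to the jump VM
gcloud compute firewall-rules create allow-iap-to-jump \
--project my-test-project \
--network my-vpc \
--direction INGRESS \
--allow tcp:22 \
--source-ranges 35.235.240.0/20 \
--target-tags jump-vm
# Allow SSH from the jump VM to the destination VM
gcloud compute firewall-rules create allow-jump-to-dest \
--project my-test-project \
--network my-vpc \
--direction INGRESS \
--allow tcp:22 \
--source-tags jump-vm \
--target-tags dest-vm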

Solution

Let’s start putting a few concepts together:

  • Kubernetes clients connect to the Kubernetes APIs through HTTPS. This means connections can be proxied!
  • In order to establish HTTPS connectivity on port TCP 443, we can leverage IAP tunnels, described above

This leads us to the solution:

Administrators connect to the GKE control plane, leveraging an IAP tunnel passing through the jump-vm.

In this example, we’ll use TinyProxy, which is freely available in most Linux distributions.

First, let’s deploy a VM connected to the same VPC where the workers live. Once the VM is up, let’s set up TinyProxy:

apt update
apt install -y tinyproxy
# Edit the /etc/tinyproxy/tinyproxy.conf adding this line
Allow localhost
service tinyproxy restart
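
Optionally, still on the bastion, we can verify that TinyProxy accepts local connections (any HTTPS endpoint works; the URL below is just an example):

# Send a test request through the local proxy
curl -I -x http://localhost:8888 https://www.google.com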

This can also be automated through an idempotent startup-script:

#! /bin/bash
apt-get update
apt-get install -y tinyproxy
grep -qxF 'Allow localhost' /etc/tinyproxy/tinyproxy.conf || echo 'Allow localhost' >> /etc/tinyproxy/tinyproxy.conf
service tinyproxy restart
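
For example (machine type and subnet are placeholders), the script can be attached at creation time through the startup-script metadata key, assuming it is saved locally as startup.sh:

# Create the bastion with no external IP and the startup script attached
gcloud compute instances create my-bastion-vm \
--project my-test-project \
--zone europe-west1-b \
--machine-type e2-small \
--subnet my-subnet \
--no-address \
--metadata-from-file startup-script=startup.sh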

In GCP, let’s add an ingress firewall rule allowing SSH connections (TCP 22) to the proxy machine from the 35.235.240.0/20 IAP range, analogous to the first rule shown earlier.

We can now move to the on-premises client machine, where gcloud and kubectl are installed.

First, let’s download the Kubernetes cluster configuration:

gcloud container clusters get-credentials my-test-cluster \
--zone europe-west1-b \
--project my-test-project \
--internal-ip
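
The --internal-ip flag stores the cluster’s private endpoint in the kubeconfig. If you want to double check, the API server address can be printed with kubectl:

# Show the API server address stored in the current kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'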

Create a tunnel, leveraging IAP (by default, TinyProxy listens on port 8888):

gcloud compute ssh my-bastion-vm \
--project my-test-project \
--zone europe-west1-b \
-- -L 8888:localhost:8888 -N -q -f

We can now run kubectl leveraging the proxy and access the Kubernetes APIs:

HTTPS_PROXY=localhost:8888 kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
demo    1/1     Running   0          19h
nginx   1/1     Running   0          24h

The HTTPS_PROXY variable can also be exported (export HTTPS_PROXY=…), so it doesn’t have to be typed every time before running kubectl. However, this would make all HTTPS connections in that shell go through the proxy, including the ones not related to kubectl, until the variable is unset.
It’s up to the user to choose which of the two forms works best for their needs.
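
For instance, the exported form would look like this (unset restores direct connectivity once you’re done):

# Export the proxy for the whole shell session
export HTTPS_PROXY=localhost:8888
kubectl get pods
# ...other kubectl commands...
# Remove the proxy setting when done
unset HTTPS_PROXY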

Followup

In the next article, we’ll see how to similarly leverage proxies to connect to the control plane of GKE private clusters from other VPCs.

Thank you Ludovico Magnocavallo for helping me review this article!
