GKE private cluster with a bastion host

Peter Hrvola
Google Cloud - Community
5 min read · Jan 6, 2021


Many GKE clusters are left exposed for management access from the internet. Public access to the control plane leaves the cluster exposed to various potential attacks and zero-day exploits. It’s industry best practice to rely on layered security, where attackers have to bypass controls on multiple levels before gaining access.

In this article you’ll learn:

  • How public clusters are exposed to attackers
  • How to set up a private cluster with additional access controls
  • How to create a bastion host with Identity-Aware Proxy (IAP) for secure access

Google Kubernetes Engine (GKE) offers three modes of operation to choose from during the set-up:

  • Public endpoint access disabled — GKE management APIs cannot be accessed directly over the internet
  • Public endpoint access enabled with authorized networks — management APIs are accessible over the internet but require the request to come from a specific set of IP addresses
  • Public endpoint access enabled — This is the least secure option and allows direct access to management APIs, relying only on Kubernetes authentication features

Structuring security around assets according to the onion model allows security rules to be enforced on multiple levels: from physical access, through the network, all the way to authentication at the application level. Establishing controls on multiple levels of the stack significantly reduces the risk of an attacker finding vulnerabilities that would expose the core assets.

Onion security model

Making a GKE cluster private adds a layer of security to your workload: an attacker now has to bypass network security rules in addition to Kubernetes authentication.

Kubernetes with public endpoint access enabled

Management of Kubernetes cluster with Public endpoint access enabled

Kubernetes clusters with public access to management APIs are normally accessed by the administrator directly from the internet where Kubernetes’ built-in authentication is used to verify the identity of the administrator and their access permissions.

Kubernetes authentication is then the only level of protection against attackers. This leaves the cluster susceptible to zero-day attacks, where a newly discovered vulnerability can be exploited before a fix is available. Adding multiple layers of security forces attackers to find multiple zero-day or unpatched vulnerabilities before gaining access to core assets.

Setting up a private GKE cluster

Management of Kubernetes cluster with Public endpoint access disabled

First, we will create a new GKE cluster with no public access allowed. Once we have a cluster up and running we will need a way to access it. Since public access is no longer an option, we will set up a bastion host that sits on the cluster’s network and relays traffic to the Kubernetes management APIs. We will use Identity-Aware Proxy (IAP), backed by GCP IAM, to establish a secure connection to our bastion host.

Setting up the network for a private GKE cluster

Let’s start by creating a private cluster in the console. In the networking settings, choose a private cluster and disable access using the external IP. We also have to specify the master IP range, which is used to allocate IP addresses to the Kubernetes control plane. The range must not overlap with any other IP range in the VPC and must have a CIDR size of /28. In this case, we will use 172.16.0.0/28.
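The same cluster can be created from the command line. A minimal sketch with gcloud, assuming a cluster name of “private-cluster” and a zone of europe-west1-b (both placeholders — substitute your own):

```shell
# Create a private GKE cluster: nodes have no public IPs and the
# control-plane endpoint is not reachable from the internet.
gcloud container clusters create private-cluster \
  --zone europe-west1-b \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.0/28 \
  --network default \
  --subnetwork default
```

The --master-ipv4-cidr flag corresponds to the master IP range discussed above and must be a non-overlapping /28.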

Once the GKE cluster is up and running we will need a way of managing it. For that, we deploy a bastion host on the cluster’s internal network, which is authorized to access the management APIs.

We will start by creating a small Compute Engine instance; an “e2-micro” is sufficient. The instance has to run on the same network as the Kubernetes cluster created in the previous step, in this case the default network and default subnet. Once the instance is running, SSH into it (the GCP console works for this) and install Tinyproxy by running “sudo apt-get install tinyproxy”. Then edit the Tinyproxy configuration in /etc/tinyproxy/tinyproxy.conf, append “Allow localhost” to the end of the file, and save it. Finally, reload Tinyproxy by executing “sudo service tinyproxy restart”.

Installing tiny proxy on a compute engine
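The steps above can be sketched as the following commands; the instance name “bastion” and the zone are placeholders:

```shell
# Create a small bastion VM on the same network/subnet as the cluster.
gcloud compute instances create bastion \
  --zone europe-west1-b \
  --machine-type e2-micro \
  --network default \
  --subnet default

# Then, inside an SSH session on the bastion:
# install Tinyproxy and allow connections from localhost only.
sudo apt-get install -y tinyproxy
echo "Allow localhost" | sudo tee -a /etc/tinyproxy/tinyproxy.conf
sudo service tinyproxy restart
```

Tinyproxy listens on port 8888 by default, which we will rely on when forwarding traffic later.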

The next step is to connect to your Kubernetes cluster from your computer using the usual tools, such as kubectl. On your machine, first download credentials for the GKE cluster using:

Set-up Kubernetes authentication configuration
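A sketch of this step, assuming the same placeholder cluster name and zone as above:

```shell
# Write cluster credentials into the local kubeconfig.
# --internal-ip makes kubectl target the private endpoint.
gcloud container clusters get-credentials private-cluster \
  --zone europe-west1-b \
  --internal-ip
```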

This sets up a local Kubernetes config, including cluster credentials, that kubectl and similar tools can use to access the cluster. Lastly, we create an Identity-Aware Proxy tunnel to the bastion using:

Create HTTPS proxy to bastion instance
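A sketch of the tunnel command, again with placeholder instance name and zone:

```shell
# Open an SSH connection to the bastion through Identity-Aware Proxy
# and forward local port 8888 to Tinyproxy on the bastion.
# -N: no remote command, -f: go to background after authenticating.
gcloud compute ssh bastion \
  --zone europe-west1-b \
  --tunnel-through-iap \
  -- -L 8888:127.0.0.1:8888 -N -q -f
```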

This creates a proxy to the bastion we created earlier. The role of the bastion is to relay requests sent to it to their destination, in our case the Kubernetes management APIs. The command runs the proxy as a background process, and it has to keep running whenever we want to communicate with the cluster.

The last step is to point kubectl at the proxy using the HTTPS_PROXY environment variable. This variable can also be used to proxy requests to the cluster with other tools, such as Jenkins X’s jx CLI or Octant, though support depends on the application.
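Putting it together, a typical invocation looks like this (8888 being Tinyproxy’s default port, forwarded by the tunnel above):

```shell
# Route kubectl traffic through the local tunnel to the bastion's proxy.
HTTPS_PROXY=localhost:8888 kubectl get pods --all-namespaces
```

Exporting the variable for the whole shell session (export HTTPS_PROXY=localhost:8888) avoids prefixing every command.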

Now we have a fully running cluster with layered security and a way to access it. Furthermore, we shouldn’t forget about patch management: the cluster and bastion set-up should be automated and regularly updated with tools such as Terraform.

Stay tuned for more posts on improving your clusters!
