GKE & IPv6

Esteban Bouza
Sep 19, 2023


As IPv6 becomes more popular, Google Kubernetes Engine (GKE) now allows users to create dual-stack clusters with IPv6 support for both nodes and workloads. This provides a more robust and scalable network infrastructure.

This article describes how to create an IPv6-enabled GKE cluster with private nodes, and covers some interesting aspects of running IPv6 workloads in GKE.

The example uses gcloud commands to create a GKE cluster with IPv6, but Terraform support through the regular Google provider exists for everything shown in this article.

export PROJECT_ID="<replace-with-project-id>"
export VPC_NAME="my-dual-stack-vpc"
export SUBNET_NAME="gke-dual-stack-subnet"
export REGION="us-central1"
export ZONE="us-central1-a"
export CLUSTER_NAME="gke-dual-stack-cluster"

VPC

The first requirement for IPv6 support in GKE is a VPC configured as a dual-stack network. This will assign both an IPv4 and an IPv6 address to your resources in the VPC. We also create a Cloud Router and Cloud NAT so that the private cluster nodes can reach the internet, for example to pull public container images.

gcloud compute networks create "$VPC_NAME" \
--project=$PROJECT_ID \
--subnet-mode=custom \
--enable-ula-internal-ipv6

gcloud compute networks subnets create "$SUBNET_NAME" \
--project=$PROJECT_ID \
--network=$VPC_NAME \
--region=$REGION \
--range=10.0.0.0/24 \
--stack-type=IPV4_IPV6 \
--ipv6-access-type=INTERNAL

gcloud compute routers create "router-nat-${SUBNET_NAME}" \
--network=$VPC_NAME \
--region=$REGION

gcloud compute routers nats create "nat-${SUBNET_NAME}" \
--region=$REGION \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--router="router-nat-${SUBNET_NAME}"

Once created, your environment will look as follows:

Dual stack VPC and subnetwork

ULA

In this example, the VPC is automatically assigned an internal IPv6 (ULA) range that covers the whole VPC and its subnets.

In our setup, fd20:690:89aa::/48 is the designated range for private IP addresses within the VPC. The specific subnet we’re using is fd20:690:89aa::/64.
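You can confirm the ranges assigned in your own project directly with gcloud. This is a small sketch; it assumes the internalIpv6Range and internalIpv6Prefix output fields, so fall back to a plain describe if your gcloud version exposes them differently.

# IPv6 ULA range assigned to the whole VPC
gcloud compute networks describe "$VPC_NAME" \
--project=$PROJECT_ID \
--format="value(internalIpv6Range)"

# IPv6 prefix assigned to the subnet
gcloud compute networks subnets describe "$SUBNET_NAME" \
--project=$PROJECT_ID \
--region=$REGION \
--format="value(internalIpv6Prefix)"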

GKE Setup — Minimum Requirements

In order to create a GKE cluster with support for IPv6, you need to meet at least the following requirements:

- A dual-stack subnet (stack type IPV4_IPV6) in the VPC, as created above
- A VPC-native cluster (--enable-ip-alias)
- GKE Dataplane V2 (--enable-dataplane-v2)
- A GKE version recent enough to support dual-stack networking

Creating a cluster

Now that we have a VPC, let's create a private GKE cluster. We will enable Dataplane V2 and set the stack type to IPv4-IPv6.

gcloud container clusters create $CLUSTER_NAME \
--project=$PROJECT_ID \
--release-channel=stable \
--region=$REGION \
--network=$VPC_NAME \
--subnetwork=$SUBNET_NAME \
--enable-ip-alias \
--enable-private-nodes \
--enable-master-authorized-networks \
--master-authorized-networks=0.0.0.0/0 \
--master-ipv4-cidr=172.16.0.32/28 \
--stack-type=ipv4-ipv6 \
--enable-dataplane-v2

After a few minutes, the cluster will be created.

Dual stack GKE cluster
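
To double-check that the cluster really came up as dual stack, you can inspect its IP allocation policy. A minimal sketch, assuming the stack type is exposed as ipAllocationPolicy.stackType in the describe output:

# Should print IPV4_IPV6 for a dual-stack cluster
gcloud container clusters describe $CLUSTER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--format="value(ipAllocationPolicy.stackType)"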

Connect to your cluster

Let’s connect to the cluster to run some kubectl commands on it.

gcloud container clusters get-credentials $CLUSTER_NAME \
--region=$REGION \
--project=$PROJECT_ID

Deploy a sample workload and connect to it

Deploy these two sample applications so we can connect from one to the other.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-ping
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app-ping
  template:
    metadata:
      labels:
        app: sample-app-ping
    spec:
      containers:
      - name: sample-app-ping
        image: nicolaka/netshoot:latest
        command: ["sleep", "infinity"]
EOF

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-pong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app-pong
  template:
    metadata:
      labels:
        app: sample-app-pong
    spec:
      containers:
      - name: sample-app-pong
        image: nicolaka/netshoot:latest
        command: ["sleep", "infinity"]
EOF
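
If you prefer to block until both Deployments are ready rather than checking manually, kubectl can wait on the rollouts (standard kubectl, using the Deployment names defined above):

kubectl rollout status deployment/sample-app-ping
kubectl rollout status deployment/sample-app-pong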

Wait until your sample application is deployed and connect to it:

kubectl exec -it deployments/sample-app-ping -- bash

View the IP addresses associated with the pod:

sample-app-88b6bf855-kxzvh:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default
link/ether 2e:1d:3b:44:51:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.108.8.3/24 brd 10.108.8.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd20:690:89aa::8:0:3/112 scope global
valid_lft forever preferred_lft forever
inet6 fe80::2c1d:3bff:fe44:513a/64 scope link
valid_lft forever preferred_lft forever
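
You can also read both addresses from the pod status without exec'ing into the container; the podIPs field lists the IPv4 and IPv6 addresses side by side. A quick sketch using the app labels from the Deployments above:

# Prints the IPv4 and IPv6 addresses of the sample-app-ping pod
kubectl get pod -l app=sample-app-ping \
-o jsonpath='{.items[0].status.podIPs[*].ip}{"\n"}'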

Connect to your sample-app-pong pod and ping the other one over IPv6. In this case we will use the IPv6 address that was assigned to the sample-app-ping pod in the previous step.

kubectl exec -it deployments/sample-app-pong -- bash
sample-app-pong-85bf8fb845-frpxk:~# ping fd20:690:89aa::20:0:2
PING fd20:690:89aa::20:0:2(fd20:690:89aa::20:0:2) 56 data bytes
64 bytes from fd20:690:89aa::20:0:2: icmp_seq=1 ttl=62 time=2.53 ms
64 bytes from fd20:690:89aa::20:0:2: icmp_seq=2 ttl=62 time=0.555 ms

At this point you could also reach the pods from other resources in your VPC, like a VM, by using the IPv6 endpoint.

Interestingly, the IP addresses for both nodes and pods are derived from a unified IPv6 range. For instance, a node might have the address fd20:690:89aa::8:0:1, while the pods are assigned fd20:690:89aa::8:0:2 and fd20:690:89aa::8:0:3.
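
You can see this for yourself on the node objects. A quick sketch; whether the node reports its IPv6 address under InternalIP can depend on the GKE version:

# Lists each node with its internal addresses (both families on a dual-stack node)
kubectl get nodes \
-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'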

As of today you may encounter some limitations with GKE and IPv6, such as not yet being able to use Private Google Access over IPv6 from IPv6 clusters, but improvements are arriving quickly.

On the upside, this allows you to start a smoother transition to IPv6 and brings other niceties along the way, such as not having to implement SNAT for your pods.

Cleanup

Once you are done, remember to delete the resources created in this article to avoid incurring extra costs.

gcloud container clusters delete $CLUSTER_NAME \
--region=$REGION \
--quiet

gcloud compute networks subnets delete $SUBNET_NAME \
--region=$REGION \
--quiet

gcloud compute routers nats delete "nat-${SUBNET_NAME}" \
--router="router-nat-${SUBNET_NAME}" \
--region=$REGION \
--quiet

gcloud compute routers delete "router-nat-${SUBNET_NAME}" \
--region=$REGION \
--quiet

gcloud compute networks delete $VPC_NAME
