Secure Your Kubernetes Cluster with Google OIDC

Benji Visser
6 min read · Apr 7, 2018


This tutorial will show you how to set up your Kubernetes cluster so that it can be accessed via kubectl and the Kubernetes Dashboard with Google OIDC. This lets your developers simply log in to the cluster with their @domain Google email address. I wrote this because I haven’t seen another guide that shows how to authZ/authN users in Kubernetes for both the Dashboard and kubectl; without per-user authentication, you also can’t do proper audit logging.

By the end of this tutorial, you’ll have set up two domains:

  1. kuber.example.org, where you can access your Kubernetes dashboard using Google OIDC

  2. kuberos.example.org, where you can easily get OIDC tokens to use when accessing your cluster with kubectl

NOTE: This tutorial was written for my purpose and toolset of AWS, kops and Google OIDC, but I hope insights can be gleaned for other toolsets or OIDC providers.

Versions

  • kubectl: 1.8.6
  • Kubernetes: 1.8.6 w/ RBAC enabled
  • Kops: 1.8

The RBAC part is important, since we need to authorize users against our RBAC configuration.

Tools

We’re using a few tools I found around the web to set this all up easily.

Step 1 — Set up an OIDC application in Google

We need to create an OIDC application in Google to use when authenticating our users.

a) Navigate to Google API Credentials dashboard

b) Create credentials > OAuth client ID > Web Application

c) Name it Kubernetes and add the callback URLs

Be sure to replace example.org with the domain you own.
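For reference, the two callbacks look something like the following. This is a sketch only: the exact paths depend on the OIDC proxy and Kuberos deployments from Step 6 and Step 7, so check those projects’ docs before copying them.

https://kuber.example.org/oauth2/callback
https://kuberos.example.org/ui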

Step 2 — Configure the Kubernetes API Server

We need to configure the Kubernetes API Server with OIDC client information so that it can validate user claims with Google.

If you’re using kops, you can do a kops edit cluster and add the following.

kubeAPIServer:
  oidcClientID: REDACTED.apps.googleusercontent.com
  oidcIssuerURL: "https://accounts.google.com"
  oidcUsernameClaim: email

Then run kops update cluster and kops rolling-update cluster to roll out the change. This will take your master node down for a couple of minutes.

If you’re not running kops, the kube-apiserver flags are very similar to the ones in the kops config above.
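As a rough sketch, the equivalent kube-apiserver flags look like this:

--oidc-client-id=REDACTED.apps.googleusercontent.com
--oidc-issuer-url=https://accounts.google.com
--oidc-username-claim=email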

Step 3 — Create TLS Certs using Let’s Encrypt

We’d like our dashboard to be served over TLS, so we’ll generate TLS certs with Let’s Encrypt (LE) for the subdomain that our Kubernetes dashboard will be exposed (but authenticated) on.

The very simple CLI tool I’m using to generate LE certs is acme.sh.
It assumes your AWS credentials are in your environment and that the user associated with those credentials can manage Route 53. It interfaces with LE to generate the certs.
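For example (a sketch; use credentials for an IAM user that is allowed to manage Route 53):

export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...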

./acme.sh --issue -d "kuber.example.org" --dns dns_aws --keylength ec-256

Nice! If all goes well, the output should look like this:

(screenshot: .acme.sh output)

We’ll use this TLS cert when we setup our Nginx Ingress.

Step 4 — Create Nginx Ingress

Next, we need to create an Nginx Ingress so that we can easily serve our dashboard from a subdomain.

The kubectl commands below will just set up the base Nginx Ingress service.

We’ll create the actual Ingress rules for our services in Step 6 and Step 7.

Nginx Ingress Basics

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml

Nginx Ingress RBAC

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

Nginx Ingress AWS ELB

kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' \
--patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml)"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml

Step 5 — Set up Route 53 for the new ELB

Kops should have set up everything you need as far as Kubernetes API server DNS records go, but we need to create an additional wildcard record that matches all subdomains and routes them through our Nginx Ingress ELB, so that we can reach kuber.example.org and kuberos.example.org.
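A minimal sketch of that record, assuming the ELB fronting the Nginx Ingress is abc123.us-east-1.elb.amazonaws.com (yours will differ; kubectl get svc -n ingress-nginx shows the real hostname):

*.example.org.   300   IN   CNAME   abc123.us-east-1.elb.amazonaws.com.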

Step 6 — Set up Proxy for Dashboard AuthN

Here comes the fun part! We’re going to deploy an OIDC proxy that will authenticate our users against Google and then forward them to the Kubernetes dashboard with a header that authenticates them (Authorization: Bearer xxxx). You can read more about accessing Kubernetes via OIDC tokens here.

For this step, we need to clone the repo that contains the oidc-proxy and install kontemplate.

Brew users can install kontemplate like this.

brew tap tazjin/kontemplate https://github.com/tazjin/kontemplate
brew install kontemplate

We will need to configure the file cluster.yaml with the appropriate values so that we can template our Kubernetes manifests: some TLS certs for your Ingress, some OIDC configuration for the OIDC proxy, and a developer/admin list for access control to cluster resources. A rough sketch of its shape is shown below.
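This sketch only illustrates kontemplate’s cluster.yaml layout (context, global, include); the actual keys under values depend on the templates in the oidc-proxy repo, so treat the names here as placeholders.

context: cluster.example.org
global:
  domain: example.org
include:
  - name: oidc-proxy-dashboard
    values:
      oidcClientId: REDACTED.apps.googleusercontent.com
      admins:
        - admin@example.org
      developers:
        - dev1@example.org
  - name: kuberos
    values:
      oidcClientId: REDACTED.apps.googleusercontent.com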

The two RBAC roles we have:
- admin: Full control of all cluster resources.
- developers: Full control of all resources in namespace default except for secrets.

This RBAC setup is fairly broad and meant for demonstration; it can and should be tuned to your needs.
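As a sketch of the developers role described above (names are hypothetical; RBAC has no “everything except X” rule, so the allowed resources are enumerated and secrets are simply left out):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developers
  namespace: default
rules:
  # Everything developers may touch in the default namespace; secrets are
  # intentionally not listed.
  - apiGroups: ["", "apps", "batch", "extensions"]
    resources: ["pods", "pods/log", "deployments", "replicasets", "services", "configmaps", "jobs"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developers
  namespace: default
subjects:
  - kind: User
    name: dev1@example.org   # matches the OIDC email claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developers
  apiGroup: rbac.authorization.k8s.io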

To set up the OIDC proxy for the dashboard, we’ll first need to create our dashboard secrets with the OIDC information.

kubectl create secret \
-n kube-system \
generic \
kube-dashboard-secrets \
--from-literal=client_id=REDACTED.apps.googleusercontent.com \
--from-literal=client_secret=REDACTED \
--from-literal=session=enGCuITaBPHQtpZSxhcivw==
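The session value is just a random secret the proxy uses for its session cookies. Assuming you want to generate your own 16-byte value, something like this works:

openssl rand -base64 16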

Then install the TLS cert we created with `.acme.sh`.

kubectl create secret tls kuberos-tls-secret \
--key '/Users/noqcks/.acme.sh/*.example.org_ecc/*.example.org.key' \
--cert '/Users/noqcks/.acme.sh/*.example.org_ecc/*.example.org.cer' \
-n kube-system

NOTE: Make sure to append your intermediate cert (ca.cer) to your server cert (*.example.org.cer) so clients receive the full chain.
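Assuming the acme.sh paths used above, that is roughly:

cat '/Users/noqcks/.acme.sh/*.example.org_ecc/ca.cer' >> '/Users/noqcks/.acme.sh/*.example.org_ecc/*.example.org.cer'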

We can use kontemplate to see the manifest output in YAML.

kontemplate template cluster.yaml -i oidc-proxy-dashboard

Once it’s to your liking, you can deploy to your cluster.

kontemplate apply cluster.yaml -i oidc-proxy-dashboard

NOTE: Ensure you fill out context in cluster.yaml so that your deployment goes to the correct cluster.

You should now be able to access your Kubernetes dashboard at https://kuber.example.org!

Step 7 — Set up Proxy for kubectl AuthN

We’ve set up our Kubernetes dashboard, but unfortunately our developers still have no easy way to authenticate with the Kubernetes cluster via kubectl! So we’ll set up a little web service called Kuberos that authenticates them with Google and hands them an OIDC token to use on the command line.

We need to create a secret containing your OIDC client secret for Kuberos to use.

kubectl create secret \
-n kube-system \
generic \
kuberos-secret \
--from-literal=secret={{ OIDC client secret here }}

Then we can deploy Kuberos to Kubernetes.

kontemplate apply cluster.yaml -i kuberos

The service should be running at https://kuberos.example.org.

If we access it and log in with our Google account, it will generate an id_token we can use to authenticate with our cluster from kubectl, along with the rest of the details needed to configure kubectl manually.

(screenshot: the Kuberos web service)
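For reference, the resulting OIDC user entry in ~/.kube/config looks roughly like this (a sketch; Kuberos shows you the exact values to plug in):

users:
- name: benjamin.visser@example.org
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://accounts.google.com
        client-id: REDACTED.apps.googleusercontent.com
        client-secret: REDACTED
        id-token: <id_token from Kuberos>
        refresh-token: <refresh_token from Kuberos>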

We can try accessing the cluster as our new user and notice that our RBAC allows us to list pods but not secrets!
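For example (assuming the developers role sketched earlier; the exact error wording varies slightly by version):

$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
example-pod   1/1       Running   0          1d

$ kubectl get secrets
Error from server (Forbidden): secrets is forbidden: User "dev1@example.org" cannot list secrets in the namespace "default"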

Hooray! It’s been a real effort; thank you for joining me on the journey!

PS: The coolest part about this is that we’re now able to do some real audit logging in Kubernetes. This is the audit log entry generated when my OIDC user lists PVCs in the default namespace.

Mar 21 17:47:07 ip-172-20-58-238 kube-apiserver-audit.log Metadata
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1beta1",
  "metadata": {
    "creationTimestamp": "2018-03-21T21:47:07Z"
  },
  "level": "Metadata",
  "timestamp": "2018-03-21T21:47:07Z",
  "auditID": "20ac14d3-1214-42b8-af3c-31454f6d7dfb",
  "stage": "RequestReceived",
  "requestURI": "/api/v1/namespaces/default/persistentvolumeclaims",
  "verb": "list",
  "user": {
    "username": "benjamin.visser@example.org",
    "groups": [
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "172.20.66.233"
  ],
  "objectRef": {
    "resource": "persistentvolumeclaims",
    "namespace": "default",
    "apiVersion": "v1"
  },
  "requestReceivedTimestamp": "2018-03-21T21:47:07.603214Z",
  "stageTimestamp": "2018-03-21T21:47:07.603214Z"
}
