Google Auth (OIDC) + Kuberos for kubectl authentication + RBAC

William Broach
5 min read · Sep 13, 2018


Kubernetes + Google Oauth (OIDC)

In this article I will show you how to set up your cluster to use Google OAuth (OIDC) together with Kubernetes RBAC as a means to control user-level and role-based access.

This will ease the process of onboarding new members, giving them cluster access, and controlling which resources they can access.

NOTE: This is primarily focused on a kops-built cluster; however, you can adapt it as needed.

Let’s get right into it.

First we need to create a Google project and obtain our client-id and client-secret.

  1. Go to: https://console.developers.google.com
  2. Click on New Project and give it a name, then click Create.
  3. On the left sidebar, click Credentials, then Create Credentials → OAuth client ID.
  4. Choose Web Application.
  5. Add your Kuberos endpoint as an authorized redirect URI.
  6. Save the client-id and client-secret somewhere secure.
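The redirect URI should point at the endpoint where you will host Kuberos. Assuming the hypothetical domain used later in this article (and that your Kuberos version handles the OAuth callback at /ui — check its README for the exact path), it would look like:

```
https://kuberos.mydomain.com/ui
```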

Next we need to configure Kubernetes to use RBAC as well as OpenID Connect (OIDC). This is done by passing the --oidc-issuer-url and --oidc-client-id parameters to the API server at runtime.

If you’re using kops you can do this by editing the cluster YAML.

$ kops edit cluster

and make sure authorization is set to rbac: {} and that the OIDC settings exist under kubeAPIServer, with <client-id> set to the one we obtained in the previous step.
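For reference, the relevant portion of the kops cluster spec looks roughly like this (a sketch; <client-id> is the OAuth client ID from the previous step, and using the email claim as the username is an assumption you can change):

```
spec:
  authorization:
    rbac: {}
  kubeAPIServer:
    oidcIssuerURL: https://accounts.google.com
    oidcClientID: <client-id>
    oidcUsernameClaim: email
```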

If you’re using kops + terraform you’ll now need to:

$ kops update cluster --out=. --target=terraform
$ terraform plan
$ terraform apply
$ kops rolling-update cluster --yes

If you’re using kops without terraform you’ll need to:

$ kops update cluster
$ kops rolling-update cluster --yes

Next we’re going to set up Kuberos, which will make the process of generating ~/.kube/config files super easy.

First we make the kuberos-secret:

$ kubectl create secret generic -n kube-system kuberos-secret --from-literal=secret=<client-secret>

Where <client-secret> is the one we got in the first section.

Next we’re going to make a kuberos-config.yml, which is a Kubernetes ConfigMap containing a partial ~/.kube/config file that Kuberos will use as a template to build new ones.

You’ll need to fill out the following:

  • <cluster_name> = Your kubernetes cluster name
  • <ca_data> = This is the base64 encoded public CA cert for your cluster.

NOTE: You can get <ca_data> by looking at your existing ~/.kube/config under certificate-authority-data, or if you’re using kops you can do the following:

$ aws s3 cp s3://<kops_state_store>/<cluster_name>/pki/issued/ca/<random_numbers>.crt cert.crt
$ cat cert.crt | base64
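As a sketch, kuberos-config.yml could look like the following. The ConfigMap name and data key here are assumptions and must match whatever your Kuberos deployment mounts; the template body itself is just a standard partial kubeconfig:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuberos-config
  namespace: kube-system
data:
  template: |
    apiVersion: v1
    kind: Config
    clusters:
    - name: <cluster_name>
      cluster:
        server: https://api.<cluster_name>
        certificate-authority-data: <ca_data>
```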

Create the configmap.

$ kubectl create -f kuberos-config.yml

Next we’re going to deploy kuberos-deployment.yml.

You’ll need to fill out the following:

  • <client-id> = The client-id we made in the first section
  • <email_domain> = The email domain users must belong to in order to use this service

NOTE: DO NOT LEAVE <email_domain> BLANK. If you do, then any valid Gmail address will be able to generate a ~/.kube/config. We don’t want that.
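As a rough sketch, kuberos-deployment.yml might look like the following. The image name, argument order, and mount paths are assumptions here — consult the Kuberos README for the exact invocation your version expects:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuberos
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuberos
  template:
    metadata:
      labels:
        app: kuberos
    spec:
      containers:
      - name: kuberos
        image: negz/kuberos:latest        # assumed image; pin a real tag in practice
        args:
        - https://accounts.google.com     # OIDC issuer
        - <client-id>                     # from the first section
        - /secret/secret                  # mounted kuberos-secret
        - /config/template                # mounted kubeconfig template
        - --email-domain=<email_domain>   # restrict who may use the service
        volumeMounts:
        - name: secret
          mountPath: /secret
        - name: config
          mountPath: /config
      volumes:
      - name: secret
        secret:
          secretName: kuberos-secret
      - name: config
        configMap:
          name: kuberos-config
```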

Launch the deployment:

$ kubectl create -f kuberos-deployment.yml

NOTE: This next portion will vary based on your individual infrastructure setup and how you want to expose the https://kuberos.mydomain.com endpoint to your users. The example below assumes you’re running a private Kubernetes cluster in AWS.

I’ll be showing an example here of one way to expose this service internally to your VPC only using an nginx ingress controller.

First we’ll need to make a Kubernetes service for kuberos. We’ll call it kuberos-svc.yml
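A minimal kuberos-svc.yml could look like this; the targetPort of 10003 is an assumption about Kuberos’s listen port, so adjust it to match your deployment:

```
apiVersion: v1
kind: Service
metadata:
  name: kuberos
  namespace: kube-system
spec:
  selector:
    app: kuberos
  ports:
  - port: 80
    targetPort: 10003
```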

Let’s create it.

$ kubectl apply -f kuberos-svc.yml

Now we’ll setup our nginx ingress controller and ingress which will terminate SSL and route traffic coming in under https://kuberos.mydomain.com to our kuberos service.

First we need to setup the RBAC for the ingress controller.

$ kubectl create -f nginx-ingress-controller-rbac.yml

Then we need to create a “default backend” which will just serve a basic 404 page for requests it can’t route.
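As a sketch, default-backend-deployment.yml can use the stock defaultbackend image, which answers 404 on / and 200 on /healthz:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: k8s.gcr.io/defaultbackend:1.4
        ports:
        - containerPort: 8080
```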

$ kubectl create -f default-backend-deployment.yml

And create the Kubernetes service for the default backend.

$ kubectl create -f default-backend-svc.yml

Next we’ll launch the actual nginx ingress controller itself.

$ kubectl create -f nginx-ingress-controller-deployment.yml

And its corresponding Kubernetes service. This will create an internal ELB in your VPC.
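The piece that makes the ELB internal is a service annotation for the AWS cloud provider. A sketch of nginx-ingress-controller-svc.yml (the selector label is an assumption and must match your controller deployment):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  annotations:
    # Tells the AWS cloud provider to create an internal (VPC-only) ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```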

$ kubectl create -f nginx-ingress-controller-svc.yml

You will now need to create a Route 53 entry for kuberos.mydomain.com and point it to the DNS name of the ELB that Kubernetes just created.

You can obtain that via:

$ kubectl get svc -n kube-system nginx-ingress-controller

It will look something like internal.XXXXXX.us-xxx-1.elb.amazonaws.com
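If you’d rather script the DNS change, you can apply a change batch with the AWS CLI (aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://record.json), where record.json looks something like:

```
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "kuberos.mydomain.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "internal.XXXXXX.us-xxx-1.elb.amazonaws.com" }]
    }
  }]
}
```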

Next we create a Kubernetes secret that has the SSL cert we will use for our kuberos.mydomain.com domain.
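kuberos-tls-secret.yml is a standard TLS secret; the name kuberos-tls is an assumption and must match the secretName you reference in your ingress:

```
apiVersion: v1
kind: Secret
metadata:
  name: kuberos-tls
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```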

$ kubectl create -f kuberos-tls-secret.yml

Finally we’ll create the ingress. This tells the ingress controller how to route to our kuberos service.
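A sketch of kuberos-ingress.yml, using the extensions/v1beta1 Ingress API current at the time of writing; the service name, port, and TLS secret name are assumptions that must match the objects you created above:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuberos
  namespace: kube-system
spec:
  tls:
  - hosts:
    - kuberos.mydomain.com
    secretName: kuberos-tls
  rules:
  - host: kuberos.mydomain.com
    http:
      paths:
      - backend:
          serviceName: kuberos
          servicePort: 80
```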

$ kubectl create -f kuberos-ingress.yml

If all went well, then when you access https://kuberos.mydomain.com (assuming you’re VPN’d in, as this is routing to an internal ELB) you will be presented with the Kuberos page.

The green link at the top left will download kubecfg.yml. Just rename it and put it in the proper spot (~/.kube/config). The account shouldn’t have any permissions at all at this point, assuming you have RBAC enabled.

To give it permissions you use Role / RoleBinding and ClusterRole / ClusterRoleBinding objects.

Here’s an example using the built-in read-only role called “view” (view is a default ClusterRole):
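A sketch of view-clusterrolebinding.yml. The subject name is a hypothetical user — with the username claim set to email (as in the kops config earlier), users are identified by their Google email address:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-all
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                     # built-in read-only ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice@mydomain.com       # hypothetical OIDC user (email claim)
```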

$ kubectl create -f view-clusterrolebinding.yml

Per the documentation, this allows read-only access to see most objects in a namespace. It does not allow viewing Roles or RoleBindings, nor Secrets, since viewing Secrets would enable privilege escalation.

How you structure your RBAC is entirely up to you and your organization.

That’s it!

I hope you found this helpful.


William Broach

DevOps Janitor | Recovering SysAdmin | Kubernetes | Docker | Distributed Computing | (@while1eq1)