Single Sign-On for Internal Apps in Kubernetes Using Google OAuth / SSO

William Broach
5 min read · Sep 21, 2018


BuzzFeed’s S.S. Octopus + Google OAuth + Kubernetes

NOTE: This guide is geared towards a Kubernetes cluster running in AWS. You might have to tweak things to fit your needs.

No one likes maintaining separate accounts for every internal app. Single sign-on lets one set of login credentials grant access to multiple applications.

And with the release of BuzzFeed’s SSO (dubbed S.S. Octopus), things just got a whole lot easier: https://github.com/buzzfeed/sso

When it’s all said and done, adding a new app behind SSO will be as simple as adding a few lines to a ConfigMap and reloading the proxy.

Let’s say you launched three new internal apps called app1, app2, and app3, and you wanted to restrict who can access each one via their Google group membership. Giving them all HTTPS / SSO / permissions would be as simple as updating the upstream-configs ConfigMap and restarting the proxy:

$ kubectl apply -f upstream_configs.yml
$ kubectl delete pod -n sso <sso-proxy_pod_name>

The new sso-proxy pod will come up with the new ConfigMap, and those services will be live and accessible at their hostnames:

https://app1.sso.mydomain.com

https://app2.sso.mydomain.com

https://app3.sso.mydomain.com

~ The resulting topology will look like this ~

Let’s get right into it.

Prerequisites:

Let’s create a namespace for this:

$ kubectl create ns sso

In the prereqs you should’ve saved a JSON file (let’s call it service_account.json) that contains the credentials for the service account we’ll be using.

We’ll start by making a Kubernetes secret for this.

$ kubectl create secret generic -n sso google-service-account --from-file=service_account.json

And since we’re already in the business of making secrets, let’s make a bunch more. These secrets will be used later on in our deployment yamls.

$ kubectl create secret generic -n sso google-admin-email --from-literal=email=adminemail@mydomain.com

NOTE: adminemail@mydomain.com needs to be a G Suite admin on mydomain.com. This is the user that the service account you created earlier will impersonate in order to do directory lookups to determine users’ group membership.

The next two will be your Google project Client ID and Client Secret (which you should have gotten in the prereqs step above):

$ kubectl create secret generic -n sso google-client-id --from-literal=client-id=1234567890-xxxxx.apps.googleusercontent.com
$ kubectl create secret generic -n sso google-client-secret --from-literal=client-secret=XXXXXX

The rest are going to be S.S. Octopus specific secrets.

$ kubectl create secret generic -n sso proxy-client-id --from-literal=proxy-client-id=$(openssl rand -base64 32 | head -c 32 | base64)
$ kubectl create secret generic -n sso proxy-client-secret --from-literal=proxy-client-secret=$(openssl rand -base64 32 | head -c 32 | base64)
$ kubectl create secret generic -n sso auth-code-secret --from-literal=auth-code-secret=$(openssl rand -base64 32 | head -c 32 | base64)
$ kubectl create secret generic -n sso proxy-auth-code-secret --from-literal=proxy-auth-code-secret=$(openssl rand -base64 32 | head -c 32 | base64)
$ kubectl create secret generic -n sso auth-cookie-secret --from-literal=auth-cookie-secret=$(openssl rand -base64 32 | head -c 32 | base64)
$ kubectl create secret generic -n sso proxy-cookie-secret --from-literal=proxy-cookie-secret=$(openssl rand -base64 32 | head -c 32 | base64)

Next we’re going to do the following:

  • Deploy sso-auth and its corresponding service.
  • Deploy sso-proxy and its corresponding service.
  • Deploy an nginx ingress controller and its corresponding service.
  • Create a Route 53 entry to point *.sso.mydomain.com and sso-auth.mydomain.com to the Internal ELB created by the nginx-controller service.
  • Create an ingress to route any incoming requests under *.sso.mydomain.com or sso-auth.mydomain.com to the sso-auth / sso-proxy pods.
  • Deploy a hello-world app and access it.

Let’s start by deploying the sso-auth service. Take a look at the following yaml.

You will need to configure the following ENV vars in the above yaml:

  • SSO_EMAIL_DOMAIN : This is for restricting logins to a specific domain, example: mydomain.com
  • HOST : Where the sso-auth service lives, example: sso-auth.mydomain.com
  • REDIRECT_URL : Force SSL redirect for sso-auth service, example: https://sso-auth.mydomain.com
  • PROXY_ROOT_DOMAIN : The root domain, example: mydomain.com
  • VIRTUAL_HOST : The vhost for sso-auth, example: sso-auth.mydomain.com
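The full manifest isn’t reproduced here, but a minimal sketch of what sso-auth-deployment.yml could look like follows. The image, command, and container port are assumptions based on the upstream buzzfeed/sso project, not taken from the original manifest; the env values mirror the list above and the secrets created earlier.

```yaml
# Hedged sketch of sso-auth-deployment.yml; image/command/port are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sso-auth
  namespace: sso
spec:
  replicas: 1
  selector:
    matchLabels: {app: sso-auth}
  template:
    metadata:
      labels: {app: sso-auth}
    spec:
      containers:
      - name: sso-auth
        image: buzzfeed/sso:latest     # assumption: upstream image
        command: ["/bin/sso-auth"]     # assumption
        ports:
        - containerPort: 4180          # assumption
        env:
        - name: SSO_EMAIL_DOMAIN
          value: mydomain.com
        - name: HOST
          value: sso-auth.mydomain.com
        - name: REDIRECT_URL
          value: https://sso-auth.mydomain.com
        - name: PROXY_ROOT_DOMAIN
          value: mydomain.com
        - name: VIRTUAL_HOST
          value: sso-auth.mydomain.com
        # Secret wiring below reuses the secrets created earlier in this guide;
        # the exact env var names the binary expects are assumptions.
        - name: GOOGLE_ADMIN_EMAIL
          valueFrom:
            secretKeyRef: {name: google-admin-email, key: email}
        - name: CLIENT_ID
          valueFrom:
            secretKeyRef: {name: google-client-id, key: client-id}
        - name: CLIENT_SECRET
          valueFrom:
            secretKeyRef: {name: google-client-secret, key: client-secret}
```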

Create the deployment:

$ kubectl create -f sso-auth-deployment.yml

Next we’ll make a Kubernetes service for sso-auth.

We’re using just a ClusterIP-type service, which will be routed to via an Ingress that we’ll create later.
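If you don’t have the yaml handy, sso-auth-svc.yml can be sketched roughly as follows (the port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sso-auth
  namespace: sso
spec:
  type: ClusterIP
  selector:
    app: sso-auth
  ports:
  - port: 80          # assumption
    targetPort: 4180  # assumption: must match the sso-auth containerPort
```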

Create the service:

$ kubectl create -f sso-auth-svc.yml

Next we’ll deploy the sso-proxy service. Let’s look at the yaml:

ENV vars of interest are:

  • EMAIL_DOMAIN : Domain to restrict logins to
  • PROVIDER_URL : URL of sso-auth service, example: https://sso-auth.mydomain.com
  • VIRTUAL_HOST : wildcard url to use for any service you want behind SSO. example: *.sso.mydomain.com
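For reference, the container section of an sso-proxy deployment might look roughly like this. The image, command, port, and the UPSTREAM_CONFIGS path are assumptions, not taken from the original manifest:

```yaml
# Hedged sketch of the sso-proxy container spec only.
containers:
- name: sso-proxy
  image: buzzfeed/sso:latest        # assumption: upstream image
  command: ["/bin/sso-proxy"]       # assumption
  ports:
  - containerPort: 4180             # assumption
  env:
  - name: EMAIL_DOMAIN
    value: mydomain.com
  - name: PROVIDER_URL
    value: https://sso-auth.mydomain.com
  - name: VIRTUAL_HOST
    value: "*.sso.mydomain.com"
  - name: UPSTREAM_CONFIGS
    value: /sso/upstream_configs.yml  # assumption: path where the upstream-configs ConfigMap is mounted
```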

Before we deploy sso-proxy, we need to create a ConfigMap that contains upstream_configs.yml.

Notice the “from” line in the config: it will need to match whatever you named your VIRTUAL_HOST above in the sso-proxy deployment.
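The original embedded yaml isn’t shown here, but based on sso’s documented upstream config format, upstream-configs-configmap.yml might look roughly like this (the service name, Google group, and cluster-local URL are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: upstream-configs
  namespace: sso
data:
  upstream_configs.yml: |
    - service: hello-world
      default:
        from: hello-world.sso.mydomain.com   # must fall under the proxy's VIRTUAL_HOST (*.sso.mydomain.com)
        to: http://hello-world.sso.svc.cluster.local
        options:
          allowed_groups:
            - engineering@mydomain.com       # placeholder Google group
```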

$ kubectl create -f upstream-configs-configmap.yml

Now we can deploy sso-proxy:

$ kubectl create -f sso-proxy-deployment.yml

And deploy its corresponding Kubernetes service:

Again we’re using a service type of ClusterIP, as we’ll be routing to this via an Ingress (created later).

$ kubectl create -f sso-proxy-svc.yml

Next we’re going to create an nginx ingress controller and an Ingress that will terminate SSL and handle routing traffic to the sso-proxy and sso-auth pods.

First we need to set up the RBAC for the ingress controller.

$ kubectl create -f nginx-ingress-controller-rbac.yml

Then we need to create a “default backend,” which will just serve a basic 404 page for requests it can’t route.

$ kubectl create -f default-backend-deployment.yml

And create the Kubernetes service for the default backend.

$ kubectl create -f default-backend-svc.yml

Next we’ll launch the actual nginx ingress controller itself.

$ kubectl create -f nginx-ingress-controller-deployment.yml

And its corresponding Kubernetes service. This will create an internal ELB in your VPC.

$ kubectl create -f nginx-ingress-controller-svc.yml

Important! Don’t skip this step.

You will now need to create Route 53 entries for *.sso.mydomain.com and sso-auth.mydomain.com and point them to the DNS name of the ELB that Kubernetes just created for the nginx controller.

It will look something like internal.XXXXXX.us-xxx-1.elb.amazonaws.com

Next we create a Kubernetes secret that holds the SSL cert we will use for our *.sso.mydomain.com and sso-auth.mydomain.com domains.
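If you’re writing this secret as a yaml file, ssl-cert-secrets.yml would be a standard kubernetes.io/tls secret along these lines (the secret name is an assumption and must match whatever your Ingress references):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sso-tls-cert    # assumption: must match the Ingress tls.secretName
  namespace: sso
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert covering *.sso.mydomain.com and sso-auth.mydomain.com>
  tls.key: <base64-encoded private key>
```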

$ kubectl create -f ssl-cert-secrets.yml

Finally we’ll create the ingress. This tells the ingress controller how to route anything coming in under *.sso.mydomain.com to the SSO proxy.
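The Ingress manifest isn’t reproduced here, but a sketch consistent with the routing described above might look like the following. The apiVersion is era-appropriate for 2018; the service names, ports, and TLS secret name are assumptions that must match your own manifests:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sso
  namespace: sso
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - "*.sso.mydomain.com"
    - sso-auth.mydomain.com
    secretName: sso-tls-cert        # assumption: your SSL cert secret
  rules:
  - host: sso-auth.mydomain.com
    http:
      paths:
      - backend:
          serviceName: sso-auth     # assumption: your sso-auth service name/port
          servicePort: 80
  - host: "*.sso.mydomain.com"
    http:
      paths:
      - backend:
          serviceName: sso-proxy    # assumption: your sso-proxy service name/port
          servicePort: 80
```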

And lastly, let’s deploy a hello-world app.

$ kubectl create -f hello-world-deployment.yml

And its corresponding Kubernetes service:

$ kubectl create -f hello-world-svc.yml
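The hello-world app can be any HTTP service. A minimal sketch of the deployment and service (the image is a placeholder; the service name and namespace must match the host in the “to:” entry of your upstream_configs.yml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: sso
spec:
  replicas: 1
  selector:
    matchLabels: {app: hello-world}
  template:
    metadata:
      labels: {app: hello-world}
    spec:
      containers:
      - name: hello-world
        image: nginxdemos/hello   # placeholder: any HTTP container works
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world   # must match the "to:" entry in upstream_configs.yml
  namespace: sso
spec:
  type: ClusterIP
  selector:
    app: hello-world
  ports:
  - port: 80
```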

If all went well you should be able to visit:

https://hello-world.sso.mydomain.com

And you should be prompted to login via your @mydomain.com email.

As always, I hope you found this guide useful!

Big thanks to BuzzFeed for open sourcing SSO!!


William Broach

DevOps Janitor | Recovering SysAdmin | Kubernetes | Docker | Distributed Computing | (@while1eq1)