Single Sign-On for Internal Apps in Kubernetes using Google OAuth / SSO

BuzzFeed’s S.S. Octopus + Google OAuth + Kubernetes

NOTE: This guide is geared towards a Kubernetes cluster running in AWS. You might have to tweak things to fit your needs.

No one likes maintaining separate accounts for every internal app. Single sign-on uses just one set of login credentials to access multiple applications.

And with the release of BuzzFeed’s SSO (dubbed S.S. Octopus), things just got a whole lot easier.

When it’s all said and done, adding a new app behind SSO will be as simple as adding a few lines to a ConfigMap and reloading the proxy.

Let’s say you launched three new internal apps called app1, app2, and app3, and you wanted to restrict who can access each one via their Google group membership. Giving them all HTTPS / SSO / permissions would be as simple as updating the upstream-configs ConfigMap and restarting the proxy:
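As a sketch, the upstream_configs.yml entries for those three apps might look like the following. The hostnames, in-cluster service addresses, and group names here are hypothetical — check the option names against the sso documentation:

```yaml
# upstream_configs.yml — one entry per app behind the proxy
- service: app1
  default:
    from: app1.sso.mydomain.com             # external hostname (under the wildcard)
    to: http://app1.sso.svc.cluster.local   # in-cluster service DNS name
    options:
      allowed_groups:
        - app1-users@mydomain.com           # restrict access by Google group
- service: app2
  default:
    from: app2.sso.mydomain.com
    to: http://app2.sso.svc.cluster.local
    options:
      allowed_groups:
        - app2-users@mydomain.com
- service: app3
  default:
    from: app3.sso.mydomain.com
    to: http://app3.sso.svc.cluster.local
    options:
      allowed_groups:
        - app3-users@mydomain.com
```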

The new sso-proxy pod would come up with the new ConfigMap and those services would be live and accessible via their names:

The resulting topology will look like this:

Let’s get right into it.


Let’s create a namespace for this: $ kubectl create ns sso

In the prereqs you should’ve saved a JSON file (let’s call it service_account.json) that contains the credentials for the service account we’ll be using.

We’ll start by making a Kubernetes secret for this.
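A minimal sketch, assuming service_account.json is in your working directory — the secret name google-service-account is a placeholder, so use whatever name your deployment yamls reference:

```shell
kubectl -n sso create secret generic google-service-account \
  --from-file=service_account.json
```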

And since we’re already in the business of making secrets, let’s make a bunch more. These secrets will be used later on in our deployment yamls.

NOTE: This is the user that the service account you created earlier will impersonate in order to do directory lookups and determine users’ group membership. It needs to be a G Suite admin.

The next two will be your Google project Client ID and Client Secret (which you should’ve gotten in the prereqs step above).

The rest are going to be S.S. Octopus-specific secrets.
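As a sketch, these could be created like this. The secret and key names are placeholders — match them to whatever your deployment yamls expect. The sso README suggests generating its cookie/auth-code secrets with the openssl one-liner shown:

```shell
# Google OAuth client credentials (from the prereqs step)
kubectl -n sso create secret generic google-client-id \
  --from-literal=client-id=<your-client-id>
kubectl -n sso create secret generic google-client-secret \
  --from-literal=client-secret=<your-client-secret>

# S.S. Octopus secrets — random 32-byte, base64-encoded values
kubectl -n sso create secret generic sso-auth-code-secret \
  --from-literal=auth-code-secret=$(openssl rand -base64 32 | head -c 32 | base64)
kubectl -n sso create secret generic sso-cookie-secret \
  --from-literal=cookie-secret=$(openssl rand -base64 32 | head -c 32 | base64)
```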

Next we’re going to do the following:

  • Deploy sso-auth and its corresponding service.
  • Deploy sso-proxy and its corresponding service.
  • Deploy an nginx ingress controller and its corresponding service.
  • Create a Route 53 entry to point *.sso.mydomain.com and sso-auth.mydomain.com to the internal ELB created by the nginx-controller service.
  • Create an ingress to route any incoming requests under *.sso.mydomain.com or sso-auth.mydomain.com to the sso-auth / sso-proxy pods.
  • Deploy a hello-world app and access it.

Let’s start by deploying the sso-auth service. Take a look at the following yaml.

You will need to configure the following ENV vars in the above yaml:

  • SSO_EMAIL_DOMAIN : This is for restricting logins to a specific domain, example: mydomain.com
  • HOST : Where the sso-auth service lives, example: sso-auth.mydomain.com
  • REDIRECT_URL : Force SSL redirect for the sso-auth service, example: https://sso-auth.mydomain.com
  • PROXY_ROOT_DOMAIN : The root domain, example: mydomain.com
  • VIRTUAL_HOST : The vhost for sso-auth, example: sso-auth.mydomain.com
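Filled in for a hypothetical mydomain.com, that env block might look something like this:

```yaml
env:
- name: SSO_EMAIL_DOMAIN
  value: mydomain.com              # only allow @mydomain.com logins
- name: HOST
  value: sso-auth.mydomain.com
- name: REDIRECT_URL
  value: https://sso-auth.mydomain.com
- name: PROXY_ROOT_DOMAIN
  value: mydomain.com
- name: VIRTUAL_HOST
  value: sso-auth.mydomain.com
```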

Create the deployment:

Next we’ll make a Kubernetes service for sso-auth

We’re using just a ClusterIP-type service, which will be routed to via an Ingress that we will create later.
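A minimal sketch of that service — the app label and the container port (4180 here) are assumptions, so match them to your sso-auth deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sso-auth
  namespace: sso
spec:
  type: ClusterIP          # internal only; the Ingress routes to it
  selector:
    app: sso-auth
  ports:
  - port: 80
    targetPort: 4180       # sso-auth's listen port
```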

Create the service:

Next we’ll deploy the sso-proxy service. Let’s look at the yaml:

ENV vars of interest are:

  • EMAIL_DOMAIN : Domain to restrict logins to, example: mydomain.com
  • PROVIDER_URL : URL of the sso-auth service, example: https://sso-auth.mydomain.com
  • VIRTUAL_HOST : Wildcard URL to use for any service you want behind SSO, example: *.sso.mydomain.com
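Again with hypothetical mydomain.com values filled in, that env block might look like:

```yaml
env:
- name: EMAIL_DOMAIN
  value: mydomain.com
- name: PROVIDER_URL
  value: https://sso-auth.mydomain.com
- name: VIRTUAL_HOST
  value: "*.sso.mydomain.com"   # quoted — a bare * is not valid YAML here
```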

Before we deploy sso-proxy we need to create a ConfigMap that contains the upstream_configs.yml.

Notice the “from” line on line 10. This will need to match whatever you named your VIRTUAL_HOST above in the sso-proxy deployment.
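A sketch of that ConfigMap with a single hypothetical upstream — note that the “from” hostname must fall under the VIRTUAL_HOST wildcard:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: upstream-configs
  namespace: sso
data:
  upstream_configs.yml: |
    - service: hello-world
      default:
        from: hello-world.sso.mydomain.com           # under *.sso.mydomain.com
        to: http://hello-world.sso.svc.cluster.local # in-cluster service
```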

Now we can deploy sso-proxy:

And deploy its corresponding Kubernetes service:

Again, we’re using a service type of ClusterIP, as we will be routing to this via an Ingress (created later).

Next we’re going to create an nginx ingress controller and an ingress that will terminate SSL and handle routing traffic to the sso-proxy and sso-auth pods.

First we need to set up the RBAC for the ingress controller.

Then we need to create a “default backend,” which will just serve a basic 404 page for requests it can’t route.

And create the Kubernetes service for the default backend.

Next we’ll launch the actual nginx ingress controller itself.

And its corresponding Kubernetes service. This will create an internal ELB in your VPC.
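A sketch of that service — the annotation below is what tells the AWS cloud provider to create an internal (rather than internet-facing) ELB, though the accepted annotation value has varied across Kubernetes versions; the selector labels are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: sso
  annotations:
    # internal ELB only; older clusters used "0.0.0.0/0" instead of "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller   # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```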

Important! Don’t skip this step.

You will now need to create Route 53 entries for *.sso.mydomain.com and sso-auth.mydomain.com and point them to the DNS name of the ELB that Kubernetes just created for the nginx controller.

It will look something like this:

Next we create a Kubernetes secret that has the SSL cert we will use for our *.sso.mydomain.com and sso-auth.mydomain.com domains.
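Assuming you have the cert and key on disk as tls.crt / tls.key (a cert covering both hostnames), the secret can be created with kubectl’s built-in tls secret type — the name sso-tls is a placeholder:

```shell
kubectl -n sso create secret tls sso-tls \
  --cert=tls.crt \
  --key=tls.key
```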

Finally we’ll create the ingress. This tells the ingress controller how to route anything coming in under *.sso.mydomain.com to the SSO proxy.
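A sketch of that ingress, written against the extensions/v1beta1 Ingress API that was current for this setup — the secret and service names are assumptions:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sso
  namespace: sso
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - "*.sso.mydomain.com"
    - sso-auth.mydomain.com
    secretName: sso-tls
  rules:
  - host: "*.sso.mydomain.com"     # everything under the wildcard -> sso-proxy
    http:
      paths:
      - backend:
          serviceName: sso-proxy
          servicePort: 80
  - host: sso-auth.mydomain.com    # the auth service itself
    http:
      paths:
      - backend:
          serviceName: sso-auth
          servicePort: 80
```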

And lastly, let’s deploy a hello-world app.

And its corresponding Kubernetes service

If all went well you should be able to visit: https://hello-world.sso.mydomain.com

And you should be prompted to login via your email.

As always, I hope you found this guide useful!

Big thanks to BuzzFeed for open sourcing SSO!!

DevOps Janitor | Recovering SysAdmin | Kubernetes | Docker | Distributed Computing | (@while1eq1)
