Single Sign-On for Kubernetes

Make use of Azure Active Directory as your Identity Provider and its Security Groups to filter access to your Kubernetes clusters (dashboard and CLI).

Guillaume Dos Santos
ForePaaS
7 min read · Jun 1, 2020


Here at ForePaaS, we wandered in the dark for quite a while to figure out how to use Azure Active Directory (AAD) with our Security Groups (SGs) to access our development Kubernetes (K8S) clusters' dashboards while also being authenticated by Azure when running kubectl commands from outside the clusters. Eventually, we found a good working setup, which we share with you here.

Requirements

The SSO experience we wanted was driven by the following needs:

  • Use AAD as Identity Provider (IdP)
  • Use AAD Security Groups to filter access using 1 SG per K8S cluster
  • Separate the use of kubectl CLI from the access to the dashboard using different SGs
  • Use a custom URL to access our K8S clusters Dashboard
  • Seamless use of kubectl from Mac machines to run CLI commands on any K8S cluster

Prerequisites

  • AAD subscription with P1 licenses (mandatory for admins to create the non-gallery apps we will need and for users to be able to use them)
  • An ingress controller; in this example we use the NGINX ingress controller in our K8S clusters (it enables custom URLs along with access to the exposed dashboard and API server for kubectl)
  • Pusher’s OAuth2_Proxy for K8S (a reverse proxy that provides authentication with many IdPs like Azure)
  • DNS records for the dashboard custom URL
  • K8S dashboard
  • TLS certificate for your custom dashboard URL if signed by a well-known public Certificate Authority (CA)
  • TLS certificate for your custom dashboard URL plus the CA certificate if signed by an on-premise CA

I. Configuring SSO for the K8S Dashboard

In order to allow the Dashboard to delegate authentication, we need to configure an Azure app, deploy the OAuth2_Proxy into the K8S cluster, and tell it how to pass Dashboard authentication requests to Azure.

a. Azure WebApp

1- On Azure portal, go to App registrations and fill in the Name and Redirect URI in the form of https://chosen_dashboard_fqdn/oauth2/callback

Azure — Step 1: Register WebApp

2- Create a new client secret for it and choose an expiration date.

Note: if you use Ansible to deploy, double-check that the secret does not contain colon characters.

Azure — Step 2: Create Client secret

3- As an admin, grant consent on behalf of your organization so that OAuth2_Proxy can connect to the Azure webapp. To do so, go to API permissions, click Add a permission, then click Grant admin consent.

Azure — Step 3: Configure API permissions

4- Move to the application itself (under Enterprise applications, not its registration anymore).
In the Properties of the webapp, select Yes for User assignment required:

Azure — Step 4: Make User assignment mandatory

5- In Users and groups, add the right AAD Security Group (create it and manage members if not already done)

Azure — Step 5: Manage access to groups

b. Kubernetes manifests

1- First, we are going to deploy Pusher's OAuth2_Proxy. Copy the manifest below (a sketch follows this list) and edit the following parameters to fit your needs.

  • Choose the correct apiVersion according to your version of K8S
  • Generate a kubedash_cookie_secret (see the command right after this list)
  • Fill in the Azure webapp ID as client-id, the secret you generated on the UI as client-secret, and the redirect URI as redirect-url
  • Finally, set your company email-domain to restrict authentication to users with an email on your domain
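A quick way to generate the cookie secret (one common approach; any random 16-, 24- or 32-byte value works):

    openssl rand -hex 16

Below is a minimal sketch of the OAuth2_Proxy Deployment, assuming the kube-system namespace and a pusher-era image tag; the angle-bracket placeholders are ours to illustrate, so adapt them to your setup:

    apiVersion: apps/v1   # use extensions/v1beta1 on clusters older than kube 1.16
    kind: Deployment
    metadata:
      name: oauth2-proxy
      namespace: kube-system
      labels:
        app: oauth2-proxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: oauth2-proxy
      template:
        metadata:
          labels:
            app: oauth2-proxy
        spec:
          containers:
          - name: oauth2-proxy
            image: quay.io/pusher/oauth2_proxy:v5.0.0
            args:
            - --provider=azure
            - --azure-tenant=<tenant_id>
            - --client-id=<webapp_application_id>
            - --client-secret=<webapp_client_secret>
            - --cookie-secret=<kubedash_cookie_secret>
            - --cookie-secure=true
            - --email-domain=<yourcompany.com>
            - --redirect-url=https://<kubedash_FQDN>/oauth2/callback
            - --http-address=0.0.0.0:4180
            - --upstream=file:///dev/null   # unused in nginx auth_request mode
            ports:
            - containerPort: 4180
              protocol: TCP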

Note: Since we work on multiple K8S versions, we use an if statement in our Ansible playbooks because of API deprecations in kube 1.16.
Just use the right apiVersion for your K8S cluster's version.

2- If your TLS certificate for your custom dashboard URL is signed by a well-known public CA (Comodo, Symantec, GoDaddy…), just skip this step.
If it is signed by an on-premise CA, you will need to create a ConfigMap for your CA certificate, so that the K8S cluster considers it a legitimate CA, and mount it in the OAuth2_Proxy Pod:

  • Add these lines to your OAuth2_Proxy Deployment manifest, under protocol: TCP (see the snippet after this list)
  • Create the ConfigMap
  • Apply the ConfigMap manifest
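A sketch of the extra Deployment lines, assuming a ConfigMap named oauth2-proxy-ca holding a ca.crt key (names and mount path are illustrative):

            volumeMounts:
            - name: ca-cert                      # mount the on-premise CA into the container's trust store
              mountPath: /etc/ssl/certs/ca.crt
              subPath: ca.crt
          volumes:
          - name: ca-cert
            configMap:
              name: oauth2-proxy-ca

Then generate the ConfigMap manifest from the certificate file and apply it (file paths assumed):

    kubectl -n kube-system create configmap oauth2-proxy-ca \
      --from-file=ca.crt=/path/to/your-ca.crt --dry-run -o yaml > oauth2-proxy-ca.yaml
    kubectl apply -f oauth2-proxy-ca.yaml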

3- Apply OAuth2_Proxy’s Deployment manifest:

4- Create and apply OAuth2_Proxy’s Service manifest
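A minimal sketch, matching the labels and port used in the Deployment above (names are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: oauth2-proxy
      namespace: kube-system
      labels:
        app: oauth2-proxy
    spec:
      selector:
        app: oauth2-proxy
      ports:
      - name: http
        port: 4180
        targetPort: 4180
        protocol: TCP

Then apply it:

    kubectl apply -f oauth2-proxy-service.yaml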

5- Create a K8S Secret with the TLS certificate for your custom Dashboard’s URL domain:
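For example, with a Secret named kubedash-tls (name and paths assumed):

    kubectl -n kube-system create secret tls kubedash-tls \
      --cert=/path/to/tls.crt --key=/path/to/tls.key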

6- Edit and apply the Dashboard’s Ingress manifest:

Note: Since we work with multiple versions of nginx-ingress, we use an if statement in our Ansible playbooks because of API deprecations in kube 1.16. Just use the right annotation for your nginx-ingress version.

7- Edit and apply Oauth2_Proxy’s Ingress manifest:

8- To access the Dashboard, go to https://{{ kubedash_FQDN }} and authenticate using your Azure Active Directory credentials. Then, skip the login window to gain authorization with the default Service Account that has full admin permissions.

Note: To be able to skip the login window you need the following container arg in the dashboard deployment manifest:
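That is the dashboard's enable-skip-login flag; in the kubernetes-dashboard container spec it looks like:

        args:
        - --enable-skip-login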

Note: If you want a different set of permissions for each Azure Security Group, you will need to use RBAC Roles.

Voilà! Anyone that is a member of the right Security Group is now able to connect to the K8S Dashboard using their AAD credentials!

II. Configuring kubectl SSO

We’re going to enable the Kubernetes API server to talk to the Azure webapp so that it can leverage Azure’s authentication mechanisms to grant us access to kubectl commands from outside the cluster. To this end, we will need another Azure app to talk to the previous one.

Note: some may want to use the same app for both uses (Dashboard and kubectl). This is impossible, since the required allowPublicClient manifest field (see step 6 below) disables the ability to filter members in Users and groups (step 5 in part I above), which breaks our requirement to filter members using SGs.

a. Azure WebApp

Go to the app registration page.

1- Expose the webapp API to the Kubernetes cluster’s API server:

Azure — Step 6: Expose WebApp API 1/2
Azure — Step 7: Expose WebApp API 2/2

2- Now create a client/native app by clicking New registration.
Fill in the Redirect URI in the following form:

Azure — Step 8: Register ClientApp

3- Add any other master IPs that you might have as Redirect URIs

Azure — Step 9: Add necessary redirect URIs

4- Link the client/native app to the webapp by granting API permission (click Add a permission and search for the previously created one):

Azure — Step 10: Link the client app to the WebApp

5- Wait for Azure to prepare the consent, then click Grant admin consent. Wait until Azure actually acknowledges it by showing the green badge in the Status column.

Azure — Step 11: Configure API permissions

6- Edit the client/native app Manifest by setting the following fields to true and Save:
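The field that matters here is allowPublicClient (the one discussed in the note at the top of this part); in the registration's Manifest editor the line looks like this:

    "allowPublicClient": true,

Your manifest may show other related flow fields next to it; allowPublicClient is the one required for the kubectl public-client (device-code) flow.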

b. Kubernetes manifests

1- In the K8S API server’s manifest add:
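These go among the kube-apiserver command-line flags. A sketch using kube-apiserver's standard OIDC options (whether the spn: prefix is needed on the client ID depends on the audience Azure puts in your tokens):

    - --oidc-client-id=spn:<webapp_application_id>
    - --oidc-issuer-url=https://sts.windows.net/<tenant_id>/
    - --oidc-username-claim=upn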

Note: the client ID must be the ID of the webapp, NOT the client/native app.
Also, don’t forget the ‘/’ at the end of the issuer URL.

2- Wait for the API server to reload (it can sometimes take a few minutes). If it still does not answer, check the Master1 API server pod’s logs from another Master to investigate what could have gone wrong.

c. Kubectl config

1- Copy your kube CA certificate to a file on your computer (and download and install the kubectl binary if you haven’t already).
You can either put the CA cert in your keychain, or in a folder of your choice, but then you need to specify the path in certificate-authority below.

2- Proceed to configure kubectl on your computer following this guide:
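As a sketch, the configuration boils down to kubectl's built-in azure auth provider (angle-bracket values are placeholders for your own names and IDs):

    kubectl config set-cluster <cluster_name> \
      --server=https://<apiserver_fqdn_or_ip>:6443 \
      --certificate-authority=/path/to/kube-ca.crt

    kubectl config set-credentials <user>@<cluster_name> \
      --auth-provider=azure \
      --auth-provider-arg=environment=AzurePublicCloud \
      --auth-provider-arg=client-id=<client_native_app_id> \
      --auth-provider-arg=apiserver-id=<webapp_application_id> \
      --auth-provider-arg=tenant-id=<tenant_id>

    kubectl config set-context <cluster_name> \
      --cluster=<cluster_name> --user=<user>@<cluster_name>
    kubectl config use-context <cluster_name>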

3- Test a kubectl command to check if everything’s OK:
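For example:

    kubectl get nodes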

That’s it! After providing the code at the URL (both displayed in the terminal), you should see the result of the command.

Today we saw how to use Azure AD’s authentication capabilities to bring Single Sign-On to Kubernetes dashboards and kubectl.
It has not been easy to bring everything together given our specific needs, but in the end, we always figure it out!

In a coming post, we’ll explore how to extend the group mapping with RBAC so that we use specific Roles instead of the default Service Account, and also how to reproduce this setup with the Kong ingress controller!

