Deploying K10 with Red Hat OpenShift OAuth proxy


At Kasten, our mission is to dramatically simplify operational management of stateful cloud-native applications. Kasten's K10, our enterprise-grade data management platform for Kubernetes backup and DR, delivers on this mission by helping our customers protect their cloud-native applications against accidental or malicious data loss.

As part of this mission to protect applications, we take security very seriously. In today's multi-tenant Kubernetes clusters, strong authentication is critical. This is why K10 supports multiple ways of authenticating users, so that fine-grained role-based access control (RBAC) can be layered on top. While we will cover our RBAC support in a later post, this article describes the authentication methods available in K10, with a focus on Red Hat OpenShift's OAuth proxy.


Methods of Authentication

Basic authentication

This mode allows users to authenticate with a username and password. It is a quick way to get a basic form of authentication set up with K10 for product evaluations or proofs of concept, but it is not recommended for production use.
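As a sketch, basic authentication can be enabled at install time through Helm values; the `auth.basicAuth.*` value names below reflect K10's Helm chart at the time of writing, so verify them against the documentation for your version. The htpasswd-style credential can be generated locally:

```shell
# Generate an htpasswd-style credential for K10 basic auth.
# "admin" / "mypassword" are placeholder values.
HTPASSWD="admin:$(openssl passwd -apr1 mypassword)"
echo "${HTPASSWD}"

# Sketch of the install command (assumes K10's auth.basicAuth Helm values):
# helm install k10 kasten/k10 --namespace=kasten-io \
#   --set auth.basicAuth.enabled=true \
#   --set auth.basicAuth.htpasswd="${HTPASSWD}"
```

The generated entry uses Apache's apr1 MD5 scheme, which is what htpasswd files conventionally contain.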

Token based authentication

In this mode, the user is presented with a login screen where they can enter their Kubernetes/OpenShift Bearer token to gain access to K10’s dashboard.

OIDC authentication

In this mode, K10 interacts with an OIDC provider such as Okta, Google, or Keycloak, so that users can access K10's dashboard with their existing credentials from that provider.
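As a sketch, OIDC mode is typically configured through K10's Helm values; the value names below reflect the chart's auth.oidcAuth settings at the time of writing, and every URL and identifier is a placeholder, so verify them against the documentation for your version:

```yaml
# Illustrative Helm values for K10 OIDC authentication (placeholders throughout).
auth:
  oidcAuth:
    enabled: true
    providerURL: "https://example.okta.com"   # OIDC issuer URL
    redirectURL: "https://k10.example.com/"   # where the K10 dashboard is served
    scopes: "groups profile email"
    clientID: "k10-client-id"
    clientSecret: "k10-client-secret"
```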

Red Hat OpenShift’s OAuth Proxy

A number of our customers use K10 for Kubernetes backup, DR, and application mobility with the OpenShift Kubernetes distribution from Red Hat.

In this ecosystem, it is extremely common to use OpenShift's OAuth proxy to authenticate users accessing applications deployed in OpenShift clusters, with the cluster configured to use Keycloak as the OpenID Connect provider.

To cleanly support this workflow for OpenShift customers, we recently added support for accessing the K10 dashboard by authenticating using the OpenShift OAuth proxy.

Screenshots of the Authentication flow involving OAuth proxy

When the user navigates to the K10 dashboard, the request reaches the proxy. The proxy presents a login screen to the user.

OAuth proxy login

After clicking the login button, the user is forwarded to the OpenShift login screen. The screen provides the option of selecting kube:admin, or the OIDC option if one has been configured in the cluster.

OpenShift login
Okta login

After clicking on the OIDC option, Okta in this example, the OIDC provider’s login screen is shown.

When authentication with the OIDC provider succeeds, the user is redirected to the K10 dashboard.

Deploying K10 with OAuth proxy

The instructions for deploying K10 with OAuth proxy are documented in K10's documentation.

The following resources have to be deployed in the same namespace as K10 in order to set up the OAuth proxy.


ServiceAccount

Create a ServiceAccount that will be used by the OAuth proxy Deployment.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: k10-oauth-proxy
  namespace: kasten-io

Cookie Secret

Create a Secret that will be used for encrypting the cookie created by the proxy. The name of the Secret will be used in the configuration of the OAuth proxy.

oc --namespace kasten-io create secret generic oauth-proxy-secret \
--from-literal=session-secret=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c43)

ConfigMap for OpenShift Root CA

Create a ConfigMap annotated with the inject-cabundle OpenShift annotation. The annotation results in the injection of OpenShift’s root CA into the ConfigMap. The name of this ConfigMap is used in the configuration of the OAuth proxy.

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"
  name: service-ca
  namespace: kasten-io


NetworkPolicy

Create a NetworkPolicy to allow ingress traffic on ports 8080 and 8083 to reach the OAuth proxy Service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-oauth-proxy
  namespace: kasten-io
spec:
  ingress:
  - ports:
    - port: 8083
      protocol: TCP
    - port: 8080
      protocol: TCP
  podSelector:
    matchLabels:
      service: oauth-proxy-svc
  policyTypes:
  - Ingress


Service

Deploy a Service for the OAuth proxy. It needs to be annotated with the serving-cert-secret-name annotation, which results in OpenShift generating a TLS private key and certificate that the OAuth proxy will use for secure connections. The name of the Secret used with the annotation must match the name used in the OAuth proxy Deployment.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: oauth-proxy-tls-secret
  labels:
    service: oauth-proxy-svc
  name: oauth-proxy-svc
  namespace: kasten-io
spec:
  ports:
  - name: https
    port: 8083
    protocol: TCP
    targetPort: https
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http
  selector:
    service: oauth-proxy-svc
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP


Deployment

Next, a Deployment for the OAuth proxy needs to be created. It is recommended to register a separate OpenShift OAuth client for this purpose. The name of the client and its secret are used with the --client-id and --client-secret container arguments, respectively, as shown in the Deployment spec below; both are defined in the OAuthClient spec covered in the next section.

When an OpenShift ServiceAccount was used as the OAuth client, we observed that the token generated by the proxy did not have sufficient scopes to operate K10. Deploying the proxy with an OpenShift ServiceAccount as the OAuth client is therefore not recommended.

It is also important to configure the --pass-access-token option so that the proxy includes the OpenShift token in the X-Forwarded-Access-Token header when forwarding a request to K10.

The --scope configuration must include the user:full scope to ensure that the token generated by the proxy has sufficient scopes for operating K10.

The --upstream configuration must point to the K10 gateway Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth-proxy-svc
  namespace: kasten-io
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      service: oauth-proxy-svc
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        service: oauth-proxy-svc
    spec:
      containers:
      - args:
        - --https-address=:8083
        - --http-address=:8080
        - --tls-cert=/tls/tls.crt
        - --tls-key=/tls/tls.key
        - --provider=openshift
        - --client-id=oauth-proxy-client
        - --client-secret=oauthproxysecret
        - --openshift-ca=/etc/pki/tls/cert.pem
        - --openshift-ca=/var/run/secrets/
        - --openshift-ca=/service-ca/service-ca.crt
        - --scope=user:full user:info user:check-access user:list-projects
        - --cookie-secret-file=/secret/session-secret
        - --cookie-secure=true
        - --upstream=http://gateway:8000
        - --pass-access-token
        - --redirect-url=
        - --email-domain=*
        image: openshift/oauth-proxy:latest
        imagePullPolicy: Always
        name: oauth-proxy
        ports:
        - containerPort: 8083
          name: https
          protocol: TCP
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /service-ca
          name: service-ca
          readOnly: true
        - mountPath: /secret
          name: oauth-proxy-secret
          readOnly: true
        - mountPath: /tls
          name: oauth-proxy-tls-secret
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccountName: k10-oauth-proxy
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: service-ca
        name: service-ca
      - name: oauth-proxy-tls-secret
        secret:
          defaultMode: 420
          secretName: oauth-proxy-tls-secret
      - name: oauth-proxy-secret
        secret:
          defaultMode: 420
          secretName: oauth-proxy-secret

OAuth Client

As mentioned earlier, it is recommended that a new OpenShift OAuth client be registered.

The redirectURIs field has to point to the domain name where K10 is accessible, i.e. the external URL at which the K10 dashboard is served.

The name of this client must match the --client-id configuration in the OAuth proxy Deployment.

The secret in this client must match the --client-secret configuration in the OAuth proxy Deployment.

The grantMethod can be either prompt or auto.

apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: oauth-proxy-client
secret: "oauthproxysecret"
redirectURIs:
- ""
grantMethod: prompt

Forwarding Traffic to the proxy

Traffic meant for K10 must be forwarded to the OAuth proxy for authentication before it reaches K10. Ensure that ingress traffic on port 80 is forwarded to port 8080, and traffic on port 443 to port 8083, of the oauth-proxy-svc Service.

Here is one example of how to forward traffic to the proxy. In this example K10 was deployed with an external gateway Service. The gateway Service’s ports were modified to forward traffic like so:

ports:
- name: https
  nodePort: 30229
  port: 443
  protocol: TCP
  targetPort: 8083
- name: http
  nodePort: 31658
  port: 80
  protocol: TCP
  targetPort: 8080
selector:
  service: oauth-proxy-svc

Reviewing the OpenShift Token

K10’s Authentication Service executes a Kubernetes Token Review using the OpenShift Token to verify that the token has been authenticated. If authenticated, the user is redirected to K10’s dashboard.

K10 also performs a Kubernetes Subject Access Review to verify that the token is authorized to operate K10. Depending on the scopes of the token, the user may have varying levels of access to the dashboard.
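These two checks can be reproduced manually against the Kubernetes review APIs. A minimal sketch follows; the token value and the resource/verb pair are placeholders, not the exact attributes K10 checks:

```yaml
# TokenReview: asks the API server whether the bearer token is authenticated.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<openshift-bearer-token>"
---
# SelfSubjectAccessReview: run while authenticated as the token's user to
# check whether it may perform a given action (placeholder attributes shown).
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    namespace: kasten-io
    verb: get
```

Submitting each of these with `oc create -f <file> -o yaml` returns the object with a status block indicating `authenticated` or `allowed`, respectively.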


Users who are well versed with Red Hat's OAuth proxy for authenticating access to applications in their clusters should find this article useful. We hope you will be as delighted as our OpenShift customers, who now have K10 fully integrated into their OpenShift environment for their cloud-native data protection needs.

For OpenShift users who are new to the OpenShift OAuth proxy, we highly recommend learning more about the project from its documentation, and then leveraging this article to deploy K10 with it.

And if you haven’t yet tried out K10 for yourself, please check it out and download a forever-free version of K10.


Onkar Bhat is an MTS at Kasten. His focus has been in the areas of authentication and authorization for multi-tenant and self-service data protection in Kubernetes. He previously worked at Big Switch Networks, NetApp, and Cisco. Onkar received his MS from Carnegie Mellon University.

