Kubernetes Day 2 Operations: AuthN/AuthZ with OIDC and a Little Help From Keycloak
Quick note: if you already know about OIDC and just want to get minikube set up with Keycloak, feel free to skip down to the bottom.
So, you’ve experimented with Kubernetes, rolled out some deployments, tested integration with your company’s CI/CD, and are now considering what steps must be taken to bring Kubernetes into production. Most of these steps tend to fall into what are considered ‘Day 2’ operations: gaining observability (metrics and logging), thinking about backup and recovery, and of course the two big A’s: Authentication and Authorization.
Coming from previous experience managing applications and platforms, you may be surprised to find that Kubernetes itself does not have users and groups as one would normally think about them. There are service accounts, roles, role bindings and special system groups, but none of the classic ‘users’ or ‘groups’. These are instead delegated to some other system through one of several means: a static password or token file, x509 certificates, an authenticating proxy, OpenID Connect (OIDC), or some form of webhook token authentication. Of these methods, the most common and well-supported approach is using OIDC.
Even if you are unfamiliar with OpenID Connect, you have more than likely used it in some way, shape or form. Have you been to a site that offers ‘Social Login’? — meaning, logging in with your Google, Facebook, GitHub or other account? If so, you’ve used OIDC.
OpenID Connect became a standard in early 2014, and functions as an extension of the OAuth2 authorization framework. It adds an identity layer and establishes a common scheme for user attributes to be passed via the OAuth2 protocol. These attributes are shared in the form of a JSON Web Token (JWT). The cool thing about this protocol is that the tokens themselves are digitally signed by the Identity Provider, and just like with a PGP-signed email, the integrity of the message is easily verified without having to heavily integrate with the provider.
Both OpenID Connect and OAuth2 warrant their own posts, but for the sake of brevity I’ll be including a quick summary and provide links to a few good resources at the bottom of this post. I highly recommend checking them out and doing a bit of further reading before putting anything into production.
So, for the purposes of this post, think of it like this: I, Bob (user) have obtained a passport (id-token) from the United States (Identity Provider). I would like to visit Barcelona for KubeConEU 2019. The Spanish Government (Kubernetes or the ‘client’) has established a trust with the US Government to reliably identify their citizens and will allow me entry to their country to attend the event.
It’s a little more complicated than that, but it gets the idea across. In truth, the Identity Provider issues you three tokens: a refresh-token, an id-token, and an access-token. The access-token can generally be disregarded for Kubernetes; it would be used if the Identity Provider were managing permissions, but that is done in Kubernetes itself with RBAC. The id-token is short-lived (minutes), while the refresh-token has a much longer lifetime (or none at all, depending on the provider). The refresh-token in turn is used to fetch a new id-token when the id-token expires. This combination of refresh and id-token minimizes the risk of the id-token itself being compromised.
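To make that refresh dance concrete, here is a rough sketch of the token refresh request a client makes behind the scenes. The issuer URL, client values and tokens below are placeholders (the /protocol/openid-connect/token path is Keycloak’s token endpoint; other providers publish theirs in their .well-known/openid-configuration discovery document):

```shell
# Sketch of a refresh-token exchange (all values are placeholders).
# kubectl performs the equivalent of this POST automatically when
# the id-token in your kubeconfig expires.
ISSUER="https://keycloak.devlocal/auth/realms/k8s"
TOKEN_ENDPOINT="$ISSUER/protocol/openid-connect/token"  # Keycloak's token endpoint

curl -ks --max-time 5 "$TOKEN_ENDPOINT" \
  -d grant_type=refresh_token \
  -d client_id=oidckube \
  -d client_secret="<client-secret>" \
  -d refresh_token="<refresh-token>" || true  # only succeeds against a live provider
```

A successful response is a JSON document containing a fresh id_token (and usually a rotated refresh_token).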
The Kubernetes cluster itself is configured to trust the users verified by the Identity Provider, and does NOT require the person to authenticate to Kubernetes directly. This has allowed the Kubernetes project to focus on well…Kubernetes, and not have to directly support the many methods of authentication and authorization. Supporting things such as Active Directory or LDAP is now the responsibility of the Identity Provider.
Now, there are quite a few options to choose from in the Identity Provider space. If you’re already hosted in a cloud provider, there’s a good chance you can tie into their system directly. If you aren’t or have other requirements there are several other options available to you, such as Auth0, Dex, Gluu, Keycloak, Okta and many more. Your choice is completely dependent on your Organizational and Compliance needs.
There are a few key terms to be aware of that will come into play with the configuration of the OIDC endpoint under just about any one of the providers:
- Client ID — The public unique name for this OIDC configuration. All tokens will be issued for this ID.
- Client Secret — A shared secret used to authenticate the client or application (kubectl or Kubernetes) to the Identity Provider.
- Issuer URL — The address of the OIDC Identity Provider.
- Redirect URL — A URL to redirect the user to after successful authorization.
- Scope — A request for access (permission) by a client or application to information about the identity. These are the messages you see when you login with a Social Login and the app requests permission to access your email, name etc.
- Claim — The actual attributes attached to the identity. These are the attributes the scope is requesting access to, e.g. your name, your email address etc. They can be extended to contain a list of groups the identity belongs to, or other seeded information. OIDC has a standard set of profile claims that are widely supported.
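Since the id-token is just a JWT, claims are easy to inspect. The sketch below builds a toy unsigned token (the issuer, email and groups values are made up for illustration) and decodes its payload; real tokens carry a signature in the third section that must be verified:

```shell
# An id-token is three dot-separated base64url sections: header.payload.signature.
# Build a toy *unsigned* token, then decode the payload to see the claims.
CLAIMS_JSON='{"iss":"https://keycloak.devlocal","email":"user@example.org","groups":["cluster-admins"]}'

b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }   # base64url, padding stripped

HEADER=$(printf '{"alg":"none"}' | b64url)
PAYLOAD_B64=$(printf '%s' "$CLAIMS_JSON" | b64url)
TOKEN="$HEADER.$PAYLOAD_B64."                        # empty signature section

# Decoding the middle section recovers the claims:
MID=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#MID} % 4 )) in 2) MID="$MID==";; 3) MID="$MID=";; esac  # re-pad
printf '%s' "$MID" | base64 -d
echo
```

This is exactly the trick that lets you peek at the username and groups claims your provider is actually sending before wiring them into the kube-apiserver.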
The Kubernetes configuration itself is quite simple, and explained well in the Authenticating portion of the docs.
The gist of it is, the kube-apiserver must be reconfigured with some additional information regarding the oidc endpoint we intend to use. The options are passed as command-line parameters to the kube-apiserver.
- oidc-issuer-url — URL of our OIDC Identity Provider.
- oidc-client-id — The unique name for this client, generated by your OIDC provider.
- oidc-username-claim — The claim to use as the username. This is sub by default, but can vary depending on your OIDC provider, or may not be friendly (e.g. a uuid). Claims other than email will have the full oidc-issuer-url prepended to the claim value to prevent clashing; the exception to this is the email claim.
- oidc-username-prefix — A string that’s inserted in front of the username to both signify that it’s an OIDC user and prevent possible clashing with an account that’s already present. If not set and the username claim is anything other than email, the issuer URL is used as the prefix by default.
- oidc-groups-claim — The name of the claim to map to groups within Kubernetes.
- oidc-groups-prefix — String that is inserted in front of the group name to prevent clashing. There is no default.
- oidc-ca-file — Path to the CA certificate that signed the certificate of the Identity Provider.
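Pulled together, the kube-apiserver invocation might look something like the following. The issuer URL, client id, claim names and file path here are assumptions matching the hypothetical Keycloak setup used later in this post; adjust them for your own provider:

```shell
# Example OIDC flags on the kube-apiserver (assumed values, config sketch only):
kube-apiserver \
  --oidc-issuer-url=https://keycloak.devlocal/auth/realms/k8s \
  --oidc-client-id=oidckube \
  --oidc-username-claim=email \
  --oidc-username-prefix=oidc: \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix=oidc: \
  --oidc-ca-file=/etc/kubernetes/ssl/ca.pem
```

How you apply these flags depends on how your cluster was deployed, e.g. editing the static pod manifest under /etc/kubernetes/manifests on kubeadm-style clusters.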
Obtaining a Token and Configuring Kubectl
Once you have your Identity Provider and kube-apiserver configured, it’s time to get yourself a token! Kubectl itself does not know how to obtain a token, however how you go about getting one tends to fall into one of three possible paths:
Web Based Helper You use some helper application requiring you to log in to a website, and it either gives you the token to use or generates the kubeconfig for you automatically.
CLI Based Helper You use a CLI application or script and pass your credentials directly. This method is generally considered less secure, but it is more commonly used when you’re running everything yourself (e.g. Keycloak, Dex or Gluu backed by AD/LDAP). Note that it does require that your Identity Provider supports what is known as the ‘Resource Owner Password Credentials’ (ROPC) grant, which enables the passing of the credentials directly.
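For the curious, here is a hedged sketch of what such an ROPC helper does under the hood; the endpoint, client and credential values are placeholders, and the call only succeeds against a live provider with the grant enabled:

```shell
# Sketch of the Resource Owner Password Credentials (ROPC) grant.
# All values are placeholders. In Keycloak, the grant is toggled
# per-client under the name "Direct Access Grants".
TOKEN_ENDPOINT="https://keycloak.devlocal/auth/realms/k8s/protocol/openid-connect/token"

curl -ks --max-time 5 "$TOKEN_ENDPOINT" \
  -d grant_type=password \
  -d scope=openid \
  -d client_id=oidckube \
  -d client_secret="<client-secret>" \
  -d username="email@example.com" \
  -d password="<password>" || true  # only succeeds against a live provider
```

A successful response is JSON containing the id_token, access_token and refresh_token, which a helper script then writes into your kubeconfig.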
Last Resort: Manually You don’t have a helper app, but do have some other method of obtaining the token information. You can configure your kubeconfig manually by adding a user with the below command:
```shell
kubectl config set-credentials <username> \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=<oidc-issuer-url> \
  --auth-provider-arg=client-id=<oidc-client-id> \
  --auth-provider-arg=client-secret=<oidc-client-secret> \
  --auth-provider-arg=id-token=<oidc-id-token> \
  --auth-provider-arg=refresh-token=<oidc-refresh-token> \
  --auth-provider-arg=idp-certificate-authority=<path-to-idp-cert>
```
Great! Now that you have your token and a user configured, create a context to use it and switch to it and you should be good to go!
```
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "oidc:<email_address>" cannot list pods in the namespace "default"
```
Except…not quite. Kubernetes (rightfully) uses a deny-first authorization policy.
Role Based Access Control (RBAC)
Before you can actually use an OIDC based user or group, you must first authorize them by attaching them to a [Cluster]Role via a [Cluster]RoleBinding. Now, RBAC itself is another big topic, too long to get into the nitty-gritty, so there’ll only be a bit of high-level overview here. However, it is certainly something that needs to be reviewed and configured before opening up your cluster to multiple groups.
All actions taken against the Kubernetes API (so…everything) are evaluated against the cluster RBAC policies. As mentioned earlier, these are a combination of a [Cluster]Role and a [Cluster]RoleBinding, where the Role is a representation of a set of permissions, and the Role Binding attaches users or groups to that set of permissions. Roles and Role Bindings are scoped to a namespace, whereas Cluster Roles and Cluster Role Bindings are considered global.
The rules for these permissions are additive and grant certain actions. These actions map to HTTP verbs that can be performed on a set of resources or objects. Those resources are inherited from API Groups specified in the rule definition. This sounds complicated, but it isn’t too bad. Below is an example from the Kubernetes Using RBAC Authorization Docs:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
The above role simply allows the reading (get, watch, list) of pods within the default namespace.
Kubernetes has several out-of-the-box ‘user-facing’ roles that can be useful for getting started. These roles include admin, edit, view and cluster-admin. The admin, edit and view roles are meant to be used within a namespace; cluster-admin is simply super-user for the entire cluster (equivalent to the default created admin user). Details on these roles can be found in the User-facing Roles Section of the Using RBAC Authorization documentation.
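As a quick illustration, kubectl can generate a binding to one of these built-in roles for you; the subject name below is hypothetical and assumes the oidc: username prefix discussed earlier:

```shell
# Hypothetical OIDC user; assumes the oidc: username prefix from earlier.
SUBJECT="oidc:firstname.lastname@example.org"

# --dry-run=client prints the object without touching the cluster; drop it to apply.
if command -v kubectl >/dev/null; then
  kubectl create rolebinding viewer \
    --clusterrole=view \
    --user="$SUBJECT" \
    --namespace=default \
    --dry-run=client -o yaml || true
fi
```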
With a [Cluster]Role in hand, our OIDC users and groups can now be bound to it via a [Cluster]RoleBinding. Bindings are pretty simple: you have two main sections, the roleRef and the subjects. The roleRef is a reference to the Role or ClusterRole you’re wanting to bind your users to, and we list our users via the subjects array. The subjects array accepts three kinds of subject references: User, Group, and ServiceAccount. For our purposes here, we only care about User and Group. If you recall earlier, there was an option to specify a prefix for our users and groups in the kube-apiserver flags. The users and groups we’re referencing in our Role Binding MUST use this prefix when referencing their name, e.g. oidc:firstname.lastname@example.org or oidc:/cluster-admins. As always, a good example can convey this much better than a few words:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: oidc:firstname.lastname@example.org
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidc:/cluster-admins
```
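Once a binding like this is applied, a cluster admin can sanity-check it with impersonation; the subjects are the hypothetical ones from the example, and the commands need a live cluster to answer:

```shell
# Hypothetical subjects from the binding example; oidc: prefixes assumed.
AS_USER="oidc:firstname.lastname@example.org"
AS_GROUP="oidc:/cluster-admins"

# "yes" means the cluster's RBAC policy allows the action.
if command -v kubectl >/dev/null; then
  kubectl auth can-i list pods --as="$AS_USER" || true
  kubectl auth can-i create deployments --as="$AS_USER" --as-group="$AS_GROUP" || true
fi
```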
Okay, after all that..
kube-apiserver configured…check. Token obtained and kubectl configured…check. RBAC good to go…check.
```
$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
keycloak-0     1/1     Running   2          1d
postgresql-0   1/1     Running   2          1d
```
Woot! We’re good!
With all that done you should have your cluster tied into an external Identity Provider. Yay!
Now, I know what you’re thinking: It’s never that easy. Truthfully there is a bit more to it, and a good walkthrough or example can tell you quite a bit more than some ramblings of a random person on the internet.
Enter oidckube: a simple little bash script wrapper around minikube that will automatically generate the needed certs, inject them into the minikube VM, deploy an instance of Keycloak, and adjust some of the settings on the VirtualBox VM to make it easy to use locally.
1 ) Install the prerequisites if you haven’t already: minikube, kubectl and VirtualBox.
2 ) Clone the oidckube repo (https://github.com/mrbobbytables/oidckube) and cd into the project directory.
3 ) Execute ./oidckube.sh init. This will take a few minutes. It goes through the process of creating a new CA, generating/injecting the needed certs, and deploying an instance of Keycloak along with PostgreSQL as its backing datastore. When it completes, it will give you a message about adding an entry to your hosts file, looking similar to this: 192.168.99.100 keycloak.devlocal. This is a local DNS name assigned to the VM.
4 ) Add the echo’ed entry referenced above to your hosts file.
5 ) Visit https://keycloak.devlocal in your favorite browser. You can ignore the cert warnings; it’s just the locally generated cert created in the init process. NOTE: If you get a 503 error, Keycloak needs a little bit more time to come up; check back again in a few seconds. If everything is happy, you should see something similar to the below:
6 ) Click the link to the ‘Administration Console’ and log in with the admin credentials:
7 ) Hover over ‘Master’ in the top left corner until the ‘Add Realm’ Button appears. Then click it.
8 ) From here, select ‘Import’ and ‘Browse File’. Then go to the project directory and open the file k8s-realm-example.json. It should then look like this:
9 ) Click ‘Create’. This will now take you to the K8s Realm within the Keycloak instance.
10 ) From the K8s Realm, click on ‘Clients’ from the menu on the left-hand side. Then select ‘oidckube’. Finally, click on ‘Credentials’ on the top menu. You should be at a screen similar to the below:
11 ) Click on the ‘Regenerate Secret’ Button, then copy the secret.
oidckube will be your client-id, and the string you just copied will function as your client-secret.
NOTE: If you feel like looking around a bit, take a look at the ‘Mappers’ tab on the top menu. This is the section that maps data stored or federated through Keycloak to OIDC claims. The example Realm config you imported already configured the groups claim and added a hardcoded claim for email_verified. The email_verified attribute is there due to the current bug kubernetes/kubernetes#59496, which makes it a requirement when using the email claim for the username.
12 ) Open up the config file in the project directory. There should be four variables. Set the KEYCLOAK_CLIENT_SECRET to the string you just copied and save the file. This will be used later when minikube is reconfigured to use OIDC and for requesting a token with the helper script.
13 ) Back in Keycloak, click on ‘Users’ in the left-hand menu. We’re going to add two users: admin and user. These users will be mapped to two groups, cluster-admins and cluster-users, which have ClusterRoleBindings pre-configured (keycloak-cluster-admins and keycloak-cluster-users) from when we init’ed oidckube.
14 ) Press the ‘Add User’ button on the right-hand side. Set the username to admin and the email to email@example.com. Then press ‘Save’.
15 ) At the next screen, click on the ‘Credentials’ tab at the top, give the user a new password like keycloak, and uncheck the ‘Temporary’ toggle. Then press the ‘Reset Password’ button.
16 ) Next, click on the ‘Groups’ tab at the top. You should see two groups in the ‘Available Groups’ section on the right. Select cluster-admins and press the ‘Join’ button.
17 ) Repeat steps 13–16, but this time for the user user: use firstname.lastname@example.org for the email, and join the user to the cluster-users group.
OPTIONAL — Add a TOTP Password: If you would like to add a TOTP token to the users, from the ‘Users’ menu, click ‘View All Users’, then select ‘Impersonate’ for the user you wish to set the TOTP token. This will open a new window as that user. You can then click on the ‘Authenticator’ link on the left-hand menu and add the new TOTP to your phone or other authenticator device.
18 ) With Keycloak now configured, stop the VM with minikube stop.
19 ) Start the instance back up with ./oidckube.sh start. This starts minikube back up with the needed OIDC flags for the kube-apiserver. Give it some time for the VM to boot and Keycloak to start.
20 ) From within the project directory, execute ./login.sh. It will prompt you for your username, password, and optional TOTP token. It will then add the user (by the email address) to your local kubeconfig.
21 ) Create a context using the new user and switch to it:
```shell
$ kubectl config set-context oidckube-admin \
    --cluster=minikube \
    --user=email@example.com \
    --namespace=default
```
22 ) Switch to the new context and you should be able to perform all the actions as a cluster-admin!
```shell
$ kubectl config use-context oidckube-admin
```
At this point you can repeat the process with the firstname.lastname@example.org user and poke around with the more limited permissions granted to the cluster-users group.
If you want to take a peek at the RBAC policies that enabled these rights, you can view them in the oidckube repo.
With that, you’ve now successfully set up OIDC and basic RBAC within a Kubernetes cluster. See? Not so bad! Now you can move on and put it to good use by integrating it with your organizational LDAP or AD.
OAuth and OIDC Links:
- An Introduction to OAuth 2 — Mitchell Anicas, DigitalOcean
- OAuth 2.0 and OpenID Connect (in plain English) — Nate Barbettini, Okta
- Identity, Claims, & Tokens — An OpenID Connect Primer, Part 1 of 3 — Micah Silverman, Okta
- OIDC in Action — An OpenID Connect Primer, Part 2 of 3 — Micah Silverman, Okta
- What’s in a Token? — An OpenID Connect Primer, Part 3 of 3 — Micah Silverman, Okta