Surely, the easiest way to get started with Kubernetes is to run `minikube start` and not worry too much about configuring the cluster. Minikube does a great job of abstracting the authentication process away, so you can start experimenting and learning about Kubernetes on your local machine quickly.
Sooner or later, you’ll realize that Kubernetes is more than just hype. Depending on who you talk to, it either is, or soon will be, the way to orchestrate your containers in production and elsewhere. So in production you might need a better approach to access control than what Minikube does in the background. However, because this technology is quite young, I’ve actually seen the opposite: inflexible, shared, and way too permissive credentials in production.
In this post I want to compare some of the common authentication schemes for Kubernetes and show you just how easy it is to do it properly.
Is this where we talk about RBAC?
No, sorry to disappoint you there. Kubernetes does a very good job of separating authentication (making sure the person really is who they claim to be) from authorization (controlling what the authenticated person can access). In this article, the focus is all on authentication. Authorization has gotten really good with the introduction of RBAC (Role-Based Access Control) in Kubernetes. So, I believe, there’s not much to discuss: just use RBAC for authorization!
If you take a look at the Kubernetes docs, the number of authentication methods is quite large, but I want to focus on a subset:
- Static Password File
- Static Token File
- X509 Client Certificates
- OpenID Connect Tokens
Static Password Files: Possibly the worst of all Authentication Methods
If the word “static” doesn’t discourage you enough, I’ll try to outline why this method of authentication isn’t a good idea: As the name implies, a comma-separated values (CSV) file is configured in the Kubernetes apiserver. This file contains a plaintext password, a username, a user ID, and optional groups. So not only do we have to interact with the apiserver whenever we want to add a new user or change a password, but all passwords are also stored in plaintext!
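As a sketch, such a file might look like this (the usernames and passwords here are made up); the columns are password, username, user ID, and an optional quoted list of groups:

```shell
# Create a static password file: password,username,uid,"group1,group2"
cat <<'EOF' > users.csv
secret,alice,1,"dev-team"
hunter2,bob,2,"dev-team,ops-team"
EOF

# The apiserver would then be pointed at this file, e.g.:
# kube-apiserver --basic-auth-file=users.csv ...
```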
Nevertheless, there are use-cases where I believe static password files can be appropriate: Anything that isn’t production. If you use Minikube on your local machine or on your CI server, I believe there’s nothing wrong with storing usernames and passwords in such a file. Also for learning about RBAC, where you need at least two users, I can recommend using those static files.
How can I implement Static Password Based Authentication?
There really isn’t too much to do. You create the file as mentioned above and configure the apiserver to use it. If you’d like to see how this can be done with Minikube, just watch the video to the left.
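For reference, a Minikube invocation along these lines passes the file through to the apiserver (the path is illustrative, and the file must already exist inside the Minikube VM):

```shell
# Illustrative: forward the basic-auth-file flag to the apiserver component.
minikube start \
  --extra-config=apiserver.basic-auth-file=/var/lib/minikube/certs/users.csv
```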
Pros:
- very simple
- good for learning and trying out access control for Kubernetes
Cons:
- contains plaintext passwords
- very inflexible
- new users can only be added if you have access to the apiserver configuration
How is Static Token Authentication different from Static Passwords?
It isn’t. At least not by much. When using static tokens instead of static passwords, you don’t specify a password in the file, but rather a token. If you were to interact with the Kubernetes API directly, you would now have to specify an `Authorization: Bearer <token>` header, whereas before you would use an `Authorization: Basic <base64-encoded-username:password>` header. However, the same advantages and disadvantages apply: these tokens still don’t have a real lifetime, and adding new users is painfully inflexible.
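To make the difference concrete, here is a sketch of both header styles against a hypothetical apiserver endpoint (the token and credentials are made up):

```shell
# Static token: the token from the CSV file goes straight into a Bearer header.
# curl -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" https://<apiserver>/api

# Static password: username:password is base64-encoded into a Basic header.
CREDENTIALS=$(echo -n "admin:secret" | base64)
echo "Authorization: Basic $CREDENTIALS"
# curl -H "Authorization: Basic $CREDENTIALS" https://<apiserver>/api
```

Note that base64 is an encoding, not encryption; anyone who sees the header can recover the password.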
Are x509 Client Certificates the answer?
By now you might have guessed from the structure of this post that I want to end with what I consider the best authentication method around. Since we’re not in the last section yet, this method can’t be the best one. Having said that, there is a major advantage to using client certificates over static files.
With this method, each user receives their own certificate, and the Kubernetes apiserver will accept (or deny) them. However, the apiserver doesn’t know about the individual certificates and therefore also doesn’t know about the individual users. Instead, the apiserver knows the certificate authority (CA) which was used to sign the individual client certificates. This means we have now managed to decouple user management from operating the Kubernetes cluster.
This method is, by the way, also the one used by Minikube by default. It configures your `kubectl` to use an x509 certificate and trusts the CA which was created to sign the certificate. This is why it feels like there’s no authentication happening at all when starting Minikube without any parameters.
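As a sketch of how such a client certificate comes to be (the names and lifetimes here are made up), you can sign a per-user certificate with a CA using openssl. Kubernetes reads the username from the certificate’s CN field and the groups from its O fields:

```shell
# Create a throwaway CA (in a real cluster this would be the cluster CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1

# Key and signing request for user "jane" in group "dev-team" (CN = user, O = group).
openssl req -newkey rsa:2048 -nodes -keyout jane.key -out jane.csr \
  -subj "/CN=jane/O=dev-team"

# Sign jane's certificate with the CA, using a deliberately short 7-day lifetime.
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out jane.crt -days 7

# Inspect the identity the apiserver would extract.
openssl x509 -in jane.crt -noout -subject
```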
Pros:
- operating the Kubernetes cluster and issuing user certificates are decoupled
- much more secure than basic authentication
Cons:
- x509 certificates tend to have a very long lifetime (months or years), so revoking user access is nearly impossible. If we instead choose to issue short-lived certificates, the user experience suffers, because replacing certificates involves some effort.
So, how can OpenID Connect solve all of our issues?
Wouldn’t it be great if we could have short-lived certificates or tokens that are issued by a third party, so there is no coupling to the operators of the K8s cluster? And at the same time, all of this should integrate with existing enterprise infrastructure, such as LDAP or Active Directory.
This is where OpenID Connect (OIDC) comes in. If you haven’t heard about OpenID Connect, here’s a link to a YouTube video in which I try to explain OpenID Connect using the Kubernetes apiserver as an example.
OpenID Connect builds on top of the OAuth2 standard. Where OAuth2 was originally designed as an authorization scheme, OpenID Connect adds an identity (i.e. authentication) scheme on top of it.
Similar to the certificate approach, the token issuer, which is completely decoupled from the Kubernetes cluster, can sign individual tokens. Because the Kubernetes apiserver can discover the token issuer’s public keys, it can check the validity of user tokens without creating a runtime dependency on the token issuer.
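On the apiserver side, this is wired up with a handful of flags; the issuer URL and client ID below are placeholders for whatever your token issuer provides:

```shell
# Illustrative OIDC flags; the apiserver discovers the issuer's signing keys
# via the issuer's /.well-known/openid-configuration document.
kube-apiserver \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/demo \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```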
User tokens (called `id_tokens` in OpenID Connect) are designed to have a very short lifetime (minutes rather than hours). However, if user access has not been revoked in the meantime, they can quite easily be renewed using `refresh_tokens`. Additionally, OAuth2 was designed to allow for frequent rotation of the token issuer’s key pair as well, so even if your token issuer is breached, the damage can be somewhat limited.
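On the client side, kubectl’s built-in oidc auth provider can handle that renewal for you; here is a sketch with placeholder values:

```shell
# kubectl refreshes the id-token via the refresh-token when it expires.
kubectl config set-credentials jane \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://keycloak.example.com/auth/realms/demo \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=client-secret=<secret> \
  --auth-provider-arg=refresh-token=<refresh-token> \
  --auth-provider-arg=id-token=<id-token>
```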
Play around with your own Token Issuer and Minikube
It might sound like this is a tedious task to set up or hard to configure. But it really isn’t. In the video below, I’m configuring all of this with a locally run setup:
For my example, I’ve used Keycloak as a token issuer. Keycloak is both a token issuer and an identity provider out of the box, and it’s quite easy to spin up using Docker. But you could also use CoreOS’s dex, or even implement your own OIDC token issuer if you feel like it.
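For instance, the Keycloak image can be started roughly like this (the admin credentials here are illustrative bootstrap values, not something to use beyond local experiments):

```shell
# Runs Keycloak on http://localhost:8080 with a bootstrap admin user.
docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak
```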
Pros:
- very secure
- short token lifetimes
- great UX (DX) possible
- no runtime coupling between OIDC provider and Kubernetes cluster
Cons:
- if you’ve never worked with OAuth2 and OIDC before, it might be a bit hard to understand. But luckily, there’s YouTube. ;-)
Conclusion and tl;dr
tl;dr: Static methods are very static. OIDC is the most flexible and most secure one. It might seem difficult to set up, but it really isn’t. It will integrate very well with your existing Identity and Access Management solutions.
I’ve compared different Kubernetes authentication methods and ended up preferring the one that uses OpenID Connect tokens. But really, I don’t feel like this is Kubernetes-specific at all. OpenID Connect is a system that, in my opinion, really makes sense and should be considered the one and only authentication scheme wherever it is supported. Luckily, Kubernetes supports it, and it isn’t too difficult to set up.
Static methods (static password or static token file) are fine for local testing and experimenting, but they really don’t scale at all. If you’re looking for a scalable solution, OIDC is the way to go.
We haven’t discussed methods such as authentication webhooks (which have a runtime dependency on the access control server) or an authenticating proxy. These are outlined in the Kubernetes docs, so feel free to check them out if you’re interested in them. I can completely understand if you stop reading after OpenID Connect, though.