How to utilise X509 Client Certificates & RBAC to secure Kubernetes
How We Effectively Managed Access to Our Kubernetes Cluster
In most organisations, adopting Kubernetes starts with developers experimenting and then running a Proof of Concept. They spread the word, and decision-makers start getting interested and see the value. Our Kubernetes journey followed a similar path. We started with a Proof of Concept for our small five-member development team, ran a pilot with a little six-node Kubernetes cluster, and soon management became interested in the platform and wanted to adopt it across the organisation.
Well, doing a Proof of Concept is one thing; running Kubernetes for a sizeable, multi-discipline organisation is another beast entirely. Managing Kubernetes within the small group wasn’t tricky: we started by sharing the admin.conf file, which gives root access to anyone who uses it. That was never going to work with multiple teams working separately, so we needed to think seriously about organising the cluster, controlling access, and securing it. If you don’t secure your Kubernetes cluster correctly, terrible things may happen: it can hurt a company financially, damage its reputation, and lead to lawsuits no admin would want.
Authenticating Users with Kubernetes
Kubernetes does not manage external users directly and relies on identity providers such as LDAP or Active Directory. Though there are many ways to control access, the best way to authenticate with the Kubernetes API server is to use X509 TLS client certificates. Kubernetes relies on mutual TLS to authenticate clients, accepting those that present a certificate signed by the cluster certificate authority. To identify users, Kubernetes uses the common name (CN) of the client certificate; for groups, it uses the Organisation field (O).
During the TLS handshake, the client presents its certificate, the API server verifies it against the cluster CA, and the certificate’s CN and O fields are mapped to the username and groups respectively.
For Kubernetes to authenticate clients using X509 client certificates, the cluster Certificate Authority needs to sign the certificate. In short, the user generates a private key and a certificate signing request (CSR), the cluster admin submits the CSR to Kubernetes and approves it, and the signed certificate is then retrieved and handed back to the user.
The best practice for managing multiple users is to group them using the Organisation field rather than relying on CNs alone. For example, setting O=web-dev in Bob’s certificate adds him to the “web-dev” group and identifies him with the Web Development team. That lets us group users according to the roles they perform and assign each group the RBAC permissions it needs for its daily responsibilities. You do not need to assign RBAC to every new team member; they get all the relevant privileges automatically just by being part of the group. That saves a lot of admin overhead and provides consistency, and once you have assigned the correct permissions to a group, you are future-proof.
Our organisation was already running an Active Directory server, and it was the obvious place to hook Kubernetes into. Once we had the users and groups from Active Directory, we could generate certificates for each user following the process above.
Authorising Users with Kubernetes
Authenticating users is one thing; authorising them is another. Kubernetes ships with Role-Based Access Control (RBAC) out of the box to permit users, groups and service accounts to perform actions within your Kubernetes cluster.
Kubernetes has two kinds of roles, “Role” and “ClusterRole”. A Kubernetes Role is a namespaced resource, meaning it grants access only to resources within its namespace. A Kubernetes ClusterRole, on the other hand, grants access at the cluster level, such as manipulating and listing nodes or listing pods across all namespaces.
Most users can get by with plain Roles; ClusterRoles should be reserved for users such as cluster and network admins.
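To make the difference concrete, here is a minimal sketch (the names pod-reader and node-reader are purely illustrative, not the roles we actually used): a namespaced Role that can read pods in the web namespace, next to a ClusterRole that can read nodes across the cluster.
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader   # illustrative name
  namespace: web     # a Role always belongs to a namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: node-reader  # illustrative name; cluster-scoped, so no namespace
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]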
We followed the principle of least privilege in our company: we granted each group only the access it required. We ensured there was no privilege escalation, and every team member had only the permissions they needed to do their work, nothing more and nothing less.
Below is how we structured our Kubernetes cluster:

The organisation had five teams:
Web Development Team — They are responsible for delivering the Web Application. We created a separate namespace for them called “web” and a Role called “web-dev” with access to spin up all resources in the web namespace except Ingress, NodePort and LoadBalancer services.
Middleware Development Team — They take care of the middleware components, such as APIs that integrate the Web Layer with the database backends. We created a separate namespace for them called “middleware” and a Role called “middleware-dev” with access to spin up all resources in the middleware namespace except Ingress, NodePort and LoadBalancer services.
Database Administrators — They manage backend components such as databases. We created a separate namespace for them called “database” and a Role called “database-admin” with access to spin up all resources in the database namespace except Ingress, NodePort and LoadBalancer services.
Cluster Admins — These are system admins responsible for maintaining and managing the Kubernetes cluster as a whole. We created a “cluster-admin” ClusterRole for them, and they had root access to the cluster.
Network Admins — These are network admins responsible for maintaining the organisation’s network infrastructure. We created a “network-admin” ClusterRole for them, and they had access only to spin up Ingresses and Services.
It is worth noting that we bound the groups to these Roles and ClusterRoles using RoleBindings and ClusterRoleBindings, because we wanted to manage access at the group level, as discussed.
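For example, a ClusterRoleBinding for the Network Admins might look something like the sketch below; the group name network-admins is an assumption here, and in practice it should match whatever the O field of their certificates carries.
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: network-admin-crb   # illustrative name
subjects:
- kind: Group
  name: network-admins      # assumed group name, taken from the certificate's O field
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: network-admin       # the ClusterRole for network admins described above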
How we managed to do it
Well, it’s time now for a short demonstration of how you can implement this. We will start by issuing a certificate to a Web Development Team member, “Bob”, then define the Role for the team and bind it to the “web-dev” group using a RoleBinding. Finally, we will run a quick smoke test to find out whether Bob has the required access.
Issuing user certificates
Let us start by issuing a certificate to Bob. Ensure you are connected to your Kubernetes cluster as a cluster admin.
Generate a key and a CSR for Bob
$ openssl req -new -newkey rsa:4096 -nodes -keyout bob-kubernetes.key -out bob-kubernetes.csr -subj "/CN=bob/O=web-dev"
Generating a RSA private key
..............................................................................................................................................................................................................................++++
................++++
writing new private key to 'bob-kubernetes.key'
Create a Certificate Signing Request with Kubernetes
Base64 encode the generated CSR file.
$ cat bob-kubernetes.csr | base64 | tr -d '\n'
Include the output in the below YAML:
$ vim bob-kubernetes-csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-kubernetes-csr
spec:
  groups:
  - system:authenticated
  request: <BASE64 encoded csr>
  usages:
  - client auth
Create the CSR using kubectl
$ kubectl create -f bob-kubernetes-csr.yaml
certificatesigningrequest.certificates.k8s.io/bob-kubernetes-csr created
Check the CSR status.
$ kubectl get csr
NAME                 AGE   REQUESTOR                      CONDITION
bob-kubernetes-csr   11m   bharatmicrosystems@gmail.com   Pending
Kubernetes has created the CSR, and it is now pending; you need to approve it before the certificate is issued.
Approve the CSR
$ kubectl certificate approve bob-kubernetes-csr
certificatesigningrequest.certificates.k8s.io/bob-kubernetes-csr approved
Recheck the status, and you will see that the CSR is approved and the certificate issued.
$ kubectl get csr
NAME                 AGE   REQUESTOR                      CONDITION
bob-kubernetes-csr   11m   bharatmicrosystems@gmail.com   Approved, Issued
Retrieve Bob’s certificate
Retrieve Bob’s certificate and verify it to see if it is issued correctly.
$ kubectl get csr bob-kubernetes-csr -o jsonpath='{.status.certificate}' | base64 --decode > bob-kubernetes-csr.crt
$ cat bob-kubernetes-csr.crt
-----BEGIN CERTIFICATE-----
MIID6jCCAtKgAwIBAgIRANoImKoaCgLChvYWaxkvbt4wDQYJKoZIhvcNAQELBQAw
LzEtMCsGA1UEAxMkMTIyNWE4ZTMtYmYxOS00YmU1LTg0MzItMGNlODI4ODAxNTdh
yX98TghQAH6Sqz553UTDR2AMe4CvJEcVnBiPPf3X3qvYNA1vpCurQJsdBfnOURy9
6Zto2J3vTAj/qFSC9EGcV3Gi02i4ksSrHEH+TBX/admmJBtWUHiSmuQ3IjN5Nlj4
................truncated output................................
ebKliem++qo6aMrbKLQXXEcjxECzc6hOPWdQgAa8Pqe9YRGw4CUxTJ/mppE5PXjs
N/QERhNXZGwdCUgug1cUlOAa8YIMvvL9TA5ZaJ+oHskPlWejg4wRpkdlqce3jCx5
RSmlTnv3LN5qKTdt9TQCcB92YLC3NxaiQj0UcxSyMvAIcGS7e9duNyOZgIMuobF9
+AJMaUr1WRrgY+jF1G+aVpcuDYgZlhkCAShmQYUi4yUop5p3gnlHwl8359KCrw==
-----END CERTIFICATE-----
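You can also inspect the subject of the issued certificate with openssl; it should show CN = bob and O = web-dev, confirming that Kubernetes will see Bob as user “bob” in the “web-dev” group.
$ openssl x509 -in bob-kubernetes-csr.crt -noout -subject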
Get the Cluster CA Certificate
For Bob to authenticate with the cluster, Bob’s kubectl client needs to trust the server. To do so, we need the cluster CA certificate.
$ kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > kubernetes-ca.crt
Creating the Kubeconfig file for Bob
Now that we have all the required artefacts in place, we can create a Kube-config file for Bob.
$ kubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=kubernetes-ca.crt --kubeconfig=bob-kubernetes-config --embed-certs
Cluster "kubernetes" set.
Set the credentials and context
If you look at bob-kubernetes-config, you will find that the contexts and users fields are empty. Let us set the relevant credentials and context for the user.
Let us start by setting the credentials.
$ kubectl config set-credentials bob --client-certificate=bob-kubernetes-csr.crt --client-key=bob-kubernetes.key --embed-certs --kubeconfig=bob-kubernetes-config
User "bob" set.
Since we want to give Bob access only to the web namespace, we set the web namespace in Bob’s context. That makes web the default namespace for Bob.
$ kubectl config set-context bob --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=web --user=bob --kubeconfig=bob-kubernetes-config
Context "bob" created.
The Kube-config file for Bob is now ready and can be distributed to him securely.
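When Bob receives the file, he can either pass it explicitly with --kubeconfig, as we do in the rest of this demonstration, or point the KUBECONFIG environment variable at it (the path below is just an example):
$ export KUBECONFIG=$HOME/.kube/bob-kubernetes-config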
Testing the configuration
We are all set and ready to test the configuration. Use Bob’s Kube-config file and switch to his context.
$ kubectl config use-context bob --kubeconfig=bob-kubernetes-config
Switched to context "bob".
Then run a kubectl version with Bob’s Kube-config file, and you should get the version details. That shows Bob can authenticate with the Kube API Server.
$ kubectl version --kubeconfig=bob-kubernetes-config
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:33:14Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-gke.5", GitCommit:"a5bf731ea129336a3cf32c3375317b3a626919d7", GitTreeState:"clean", BuildDate:"2020-03-31T02:49:49Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
Try to list pods using Bob’s Kube-config file.
$ kubectl get pods --kubeconfig=bob-kubernetes-config
Error from server (Forbidden): pods is forbidden: User "bob" cannot list resource "pods" in API group "" in the namespace "web"
What’s wrong? Well, we have given Bob access to the cluster, and the Kube API Server authenticates him, but he is not authorised to do anything yet because we haven’t set up RBAC for his group.
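You can confirm this from Bob’s side with kubectl auth can-i, which asks the API server whether the current user is allowed to perform a given action. At this stage, it should simply answer no:
$ kubectl auth can-i list pods --namespace web --kubeconfig=bob-kubernetes-config
no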
Set up RBAC for Web Developers
Though we have issued the Kube-config file to Bob successfully, he can authenticate with the Kube API server but cannot perform any actions within the cluster, as he has no permissions yet. Let us now set up RBAC for Bob’s group.
Create a Namespace
Start by creating a namespace “web” for Web Developers to deploy and manage their workloads.
$ kubectl create ns web
namespace/web created
Create a Role
Create a Role for the web-dev group to access all web namespace resources.
$ vim web-dev-role.yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: web-dev
  namespace: web
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
$ kubectl apply -f web-dev-role.yaml
role.rbac.authorization.k8s.io/web-dev created
Create a Role Binding
Bind the “web-dev” group with the “web-dev” role using a RoleBinding.
$ vim web-dev-rb.yaml
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: web-dev-rb
  namespace: web
subjects:
- kind: Group
  name: web-dev
  namespace: web
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: web-dev
$ kubectl apply -f web-dev-rb.yaml
rolebinding.rbac.authorization.k8s.io/web-dev-rb created
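Before handing over to Bob, the cluster admin can also verify the binding using kubectl’s impersonation flags; if the RoleBinding is correct, this should now return yes:
$ kubectl auth can-i list pods --as=bob --as-group=web-dev --namespace=web
yes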
Smoke Test
As we have now assigned the correct Role to the “web-dev” group, it is time to run some tests.
List pods using Bob’s Kube-config file
$ kubectl get pods --kubeconfig=bob-kubernetes-config
No resources found.
Congratulations! You have successfully configured RBAC for Bob, as he can now list pods in his namespace.
Let’s try something else. Can we use Bob’s Kube-config file to list the nodes?
$ kubectl get nodes --kubeconfig=bob-kubernetes-config
Error from server (Forbidden): nodes is forbidden: User "bob" cannot list resource "nodes" in API group "" at the cluster scope
No! That is what we expected, as Bob only has access to the web namespace. Let us try out a few other things. How about listing pods from all namespaces?
$ kubectl get pods --all-namespaces --kubeconfig=bob-kubernetes-config
Error from server (Forbidden): pods is forbidden: User "bob" cannot list resource "pods" in API group "" at the cluster scope
No! We expected that as well. Bob only has access to pods in the web namespace.
Let’s see if Bob can create a deployment within the web namespace.
$ kubectl create deployment nginx --image=nginx --kubeconfig=bob-kubernetes-config
deployment.apps/nginx created
And Kubernetes has created the NGINX deployment. Let us now list it.
$ kubectl get deployment nginx --kubeconfig=bob-kubernetes-config
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           23s
We see that NGINX is now running successfully. Let us try to update the deployment with a different image.
$ kubectl set image deployment/nginx nginx=nginx:1.16 --kubeconfig=bob-kubernetes-config
deployment.extensions/nginx image updated
Test if Bob can delete the NGINX deployment
$ kubectl delete deployment nginx --kubeconfig=bob-kubernetes-config
deployment.extensions "nginx" deleted
List the deployment one more time to see if the operation was successful.
$ kubectl get deployment nginx --kubeconfig=bob-kubernetes-config
Error from server (NotFound): deployments.extensions "nginx" not found
And yes, it was. That shows we have successfully configured RBAC within the cluster and provided Bob with the appropriate permissions to access it and do his day-to-day job. We have also ensured Bob has no access to any cluster-level resource or any other team’s namespace.
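As one final negative test, you could also check that Bob cannot see another team’s namespace, for example middleware; this should fail with a similar Forbidden error:
$ kubectl get pods --namespace middleware --kubeconfig=bob-kubernetes-config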
Conclusion
Thanks for reading. I hope you enjoyed the article. This was the story of how we enforced RBAC effectively within our organisation. Of course, there are no hard and fast rules, and you might want to do things differently, but remember that the principle of least privilege is what you need to follow.