You may have heard people refer to Kubernetes as API-centric. That is, everything that happens in the cluster revolves around a core component in the control plane (or master node) known as the API Server. The API Server is the gatekeeper for your entire cluster: if you want to CRUD (Create, Read, Update, Delete) any Kubernetes object, the request has to go through this API. The API Server validates and configures API objects such as pods, services, replication controllers and deployments. All interaction between the different clients and the API Server is REST-based in order to fulfil the various CRUD operations. These clients range from engineers using the kubectl CLI to nodes running the kubelet (a node agent). Furthermore, the rest of the control plane (the scheduler and the controllers) is always talking to the API Server as well.
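To make that REST surface concrete, kubectl can send a request verbatim with its --raw flag; the sketch below assumes a running cluster, a configured kubeconfig, and the default namespace:

```
# `kubectl get pods` is ultimately a GET against a REST path on the API Server.
# --raw sends the request path as-is and prints the raw JSON response:
kubectl get --raw /api/v1/namespaces/default/pods
```

Every other client (the kubelet, the scheduler, the controllers) hits the same kind of endpoint, just with different credentials.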
With such an important component at the heart of the behaviour and state of the cluster, every incoming request needs to be validated to prove the authenticity of the requester, and checked to ensure that the requester is actually allowed to perform the operations included in the request.
Understanding Authentication & Authorization around the API Server
Authentication is essentially about proving that you are who you say you are. Kubernetes does not have an internal system for storing and managing user accounts; users are created and managed outside of the cluster. So how does the authentication flow work? Incoming client requests have credentials embedded in or attached to them, which are then passed off to an external authentication module or system that validates the user. Amazon EKS uses aws-iam-authenticator for the authentication layer, a tool that uses IAM credentials to authenticate to the cluster.
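Concretely, the kubeconfig that EKS generates delegates credential fetching to an exec plugin. A minimal sketch of that user entry follows; the cluster name is illustrative and the exec apiVersion has changed across Kubernetes releases:

```yaml
# kubeconfig excerpt: the client shells out to aws-iam-authenticator, which
# produces a token signed with your IAM credentials for the API Server to verify.
users:
- name: my-eks-cluster              # illustrative name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # version varies by release
      command: aws-iam-authenticator
      args: ["token", "-i", "my-eks-cluster"]
```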
On the other hand, system components such as kubelets, pods, and the other members of the control plane also need to authenticate with the API Server. Every pod and every control plane member that interacts with the API Server is associated with a Service Account, which is the identity it uses when authenticating with the API Server.
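For example, a pod can be bound to a specific Service Account in its spec; a sketch with illustrative names:

```yaml
# The Service Account's token is auto-mounted into the container at
# /var/run/secrets/kubernetes.io/serviceaccount/token and presented to the
# API Server whenever the pod's processes call it.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  serviceAccountName: demo-service-account
  containers:
    - name: app
      image: nginx
```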
Authorization is the step that follows the successful validation of a user and it deals with what that user is allowed to do in relation to the different resources on the cluster. Role-based access control (RBAC) is the method of regulating user access.
Who can perform which actions on which resources?
If you want a user to be able to carry out various actions or operations on the cluster, you will have to create some allow rules. When you create a new Kubernetes cluster, your cluster creator (user or role) is given all the keys to the kingdom with admin privilege, even though RBAC denies everything by default.
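You can probe this deny-by-default behaviour with kubectl auth can-i, which asks the API Server whether the current (or an impersonated) identity may perform an action. A sketch, assuming a reachable cluster and an illustrative username:

```
kubectl auth can-i '*' '*'                      # as the cluster creator: yes
kubectl auth can-i list pods --as=some-new-user # no binding yet, so: no
```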
Securing API Server operations with RBAC
RBAC authorization uses the rbac.authorization.k8s.io API group and consists of four API objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding.
- Role — Used to determine which operations can be carried out on which resources in a specific namespace.
- ClusterRole — Used to determine which operations can be carried out on which resources in the scope of the entire cluster.
- RoleBinding — Used to determine which users or service accounts are authorized to carry out operations on resources in a given namespace as specified in a Role.
- ClusterRoleBinding — Used to determine which users or service accounts are authorized to carry out operations on resources across the scope of the cluster as specified in a ClusterRole.
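As an illustration of the cluster-scoped pair, a minimal ClusterRole might grant read access to nodes, a resource that does not live in any namespace; the name here is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader          # illustrative name
rules:
  - apiGroups: [""]          # "" indicates the core API group
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```

A ClusterRoleBinding would then attach this ClusterRole to a user or group in the same way a RoleBinding attaches a Role, only cluster-wide.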
Map IAM Users & Roles to the EKS Cluster
To map IAM users and roles to Kubernetes users in the EKS cluster, you have to define them in the aws-auth ConfigMap, which should exist after the creation of your cluster. To view the existing aws-auth ConfigMap in your cluster, run the following command:
kubectl describe configmap -n kube-system aws-auth
This should produce a response along these lines, with a mapRoles entry for your worker nodes' instance role:

```
Name:         aws-auth
Namespace:    kube-system

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: <ARN of instance role (not instance profile)>
  username: system:node:{{EC2PrivateDNSName}}
```
To add an IAM user or role to the cluster, you modify this ConfigMap by adding the respective ARN and a Kubernetes username value to the mapRoles or mapUsers property as an array item. Below is an example with an additional IAM role and user added to the cluster.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<cluster-name>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::<account-id>:role/ops-role
      username: ops-role
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/lukas-rbac-user
      username: lukas-rbac-user
```
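If you would rather not edit the ConfigMap by hand, eksctl can write the same identity mapping for you; a sketch with illustrative cluster and ARN values:

```
eksctl create iamidentitymapping \
  --cluster my-eks-cluster \
  --arn arn:aws:iam::<account-id>:user/lukas-rbac-user \
  --username lukas-rbac-user
```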
Creating RBAC Objects for Authorization
We’ve already covered the theory of RBAC objects, so we’re going to create a Role and a RoleBinding for the users in our EKS cluster.
In the file below, I create two Roles that permit any users attached to them to perform the following actions in the default namespace.
- The ops-actions Role — Any users attached to this Role can perform all actions as specified by the wildcard (*) in the verbs property. Furthermore, they can make all HTTP verb requests to the resources in the specified apiGroups, namely the core API group and the apps API group.
- The engineer-actions Role — Any users attached to this Role can perform read actions (get and list) as specified in the verbs property. Furthermore, they can make these HTTP GET requests only to the pod and service resources of the core API group.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ops-actions
  namespace: default
rules:
  - apiGroups: ["", "apps"] # "" indicates the core API group
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: engineer-actions
  namespace: default
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods", "services"]
    verbs: ["get", "list"]
```
The next step is to create the respective RoleBindings for the Roles defined above. In the code below, we list the users in the subjects property of the RoleBinding object and attach the RoleBinding to the relevant Role.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ops-role-binding        # binding names are illustrative
  namespace: default
subjects:
  - kind: User
    name: ops-role
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ops-actions
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: engineer-role-binding   # binding names are illustrative
  namespace: default
subjects:
  - kind: User
    name: lukas-rbac-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: engineer-actions
  apiGroup: rbac.authorization.k8s.io
```
You can add these code snippets to a single file or multiple files depending on your preference. Once you have done so, you can apply these changes to your Kubernetes cluster with the following command:
kubectl create -f <file-name>.yaml
After applying these changes, ensure that they have been created by running the following commands:
kubectl get roles # gets roles created for the default namespace
kubectl get rolebindings # gets rolebindings for the default namespace
Testing Roles & RoleBindings with IAM Profiles
Assuming you saved the credentials for the IAM user profile and role that you created (in this example ops-role and lukas-rbac-user), you can update your AWS CLI config and credentials files accordingly. If you've configured a default profile with your AWS CLI, these files can be found under the ~/.aws/ directory.
Update your credentials file with the IAM user profile access key and secret key.
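A sketch of that credentials entry, with placeholder keys:

```ini
# ~/.aws/credentials
[lukas-rbac-user]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>
```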
Update your config file with the IAM user profile and the IAM role that will be used to connect to the cluster.

```ini
# ~/.aws/config
[profile lukas-rbac-user]
region = eu-west-1

[profile ops-role]
role_arn = arn:aws:iam::<account-id>:role/ops-role
source_profile = lukas-rbac-user # can use another source profile
```
After configuring the profiles with your AWS CLI, you can test the accessibility and operations of each profile against the EKS cluster. Remember to update the AWS_DEFAULT_PROFILE environment variable when switching between profiles. Connect to the cluster with the command below and fire away!
aws eks update-kubeconfig --name <eks-cluster-name>
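For example, switching to the engineer user and probing both an allowed and a disallowed operation might look like this (assuming the Roles, RoleBindings and aws-auth mappings above, with an illustrative pod name):

```
export AWS_DEFAULT_PROFILE=lukas-rbac-user
aws eks update-kubeconfig --name <eks-cluster-name>
kubectl get pods              # permitted: engineer-actions grants get/list on pods
kubectl delete pod demo-pod   # should be forbidden: no delete verb in engineer-actions
```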
I hope the above proves useful to you as you try to achieve optimal security for the API Server in your EKS cluster. As always, happy coding 😃 💻. If you enjoyed the post, feel free to buy me a coffee here.