Managing IAM Roles as K8s Resources

Naveen M
keikoproj
Jul 10, 2020


Applications deployed in Kubernetes clusters on AWS need IAM roles to manage and access AWS resources. At Intuit, we have 200+ clusters and 8000+ namespaces. Until recently, IAM roles were managed separately from Kubernetes resources, through a centralized application.

This worked fine, but developers had to manually update their IAM policies before deploying their applications. These manual IAM policy updates left a margin for error: developers sometimes forgot to update an IAM policy while promoting an application to the next staging environment, which frustrated many developers.

We used to control which permissions were allowed from our cluster management service, but some platform applications needed exceptions for more permissive policies. We started baking these exceptions into the code but soon realized that we needed a more scalable solution.

That is when we came up with iam-manager, a Kubernetes CRD (Custom Resource Definition) and controller that manages AWS IAM roles as Kubernetes resources. iam-manager is open sourced as part of the Keiko Project and allows applications to safely and conveniently create and manage IAM roles as part of their deployment pipeline (i.e., kubectl apply) along with other Kubernetes resources.

As with any feature that creates or modifies roles and permissions, security is of paramount concern. iam-manager uses an AWS IAM Permission Boundary along with other safeguards to ensure appropriate levels of visibility and control.

AWS IAM Permission Boundaries played a major role in our security design. Without them, developers could include any IAM permissions in an Iamrole spec, and an application with an IAM role granting eks:* or ec2:* permissions could destroy the entire cluster. We wanted to control which permissions can be delegated to the roles created by iam-manager, and a Permission Boundary was the perfect fit. An AWS IAM Permission Boundary defines the maximum set of allowed (or denied) IAM permissions a role can use, irrespective of what is specified in the role's own policy. In short, the effective permissions are the intersection of the permissions in the Permission Boundary and those in the IAM role policy.

Let's take an example: an IAM role has the "AdministratorAccess" policy, which grants pretty much all access. If we attach a Permission Boundary that allows only s3:Get* (and which can be changed only by the cluster administrator, not by the user), that role can perform only s3:Get* actions even though it has "Administrator" permissions. In simple words, permission boundaries limit the permissions that can be delegated through the iam-manager CRD. Check out the AWS IAM Permission Boundary documentation for more info.
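To illustrate, a boundary policy along these lines would cap every role that carries it to s3:Get* actions, no matter what the role's own policy says (a minimal sketch; the statement ID is a placeholder, not taken from iam-manager itself):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryAllowS3ReadOnly",
      "Effect": "Allow",
      "Action": ["s3:Get*"],
      "Resource": "*"
    }
  ]
}
```

A role created with this boundary attached (for example via the --permissions-boundary flag on aws iam create-role) would have "AdministratorAccess" ∩ s3:Get* = s3:Get* as its effective permissions.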

We also have a webhook that rejects IAM role creation if the Iamrole spec contains a policy that is not on the whitelist, which can be configured through a config map. This is entirely optional, since the AWS IAM Permission Boundary already guards against excessive permissions, but it can be used to keep policies very clean.
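Conceptually, the whitelist lives in a config map alongside the controller; a sketch of what such a config map could look like is below (the key name and allowed prefixes are illustrative assumptions — refer to the project's config map documentation for the exact options):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: iam-manager-iamroles-v1alpha1-configmap
  namespace: iam-manager-system
data:
  # Iamrole specs may only use actions with these prefixes;
  # anything else is rejected by the validating webhook.
  # (key name shown here is illustrative)
  iam.policy.action.prefix.whitelist: "s3,sqs,sns,dynamodb"
```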

And last but not least, we needed to be very careful with the IAM role assigned to the iam-manager-controller pod itself, so we carefully designed its IAM permission policy to allow only a limited set of tasks. That is, it cannot create any IAM role without attaching the pre-defined permission boundary, can create roles only with pre-defined names (i.e., k8s-*), and cannot delete any role that does not carry the pre-defined tags (we add these tags only to roles created by iam-manager). For more info, please refer to the iam-manager IAM policy.
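The shape of such a controller policy can be sketched as follows (a hedged illustration, not the actual iam-manager policy; the account ID, boundary ARN, and tag key are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateOnlyWithBoundaryAndNamePrefix",
      "Effect": "Allow",
      "Action": ["iam:CreateRole", "iam:PutRolePolicy", "iam:TagRole"],
      "Resource": "arn:aws:iam::123456789012:role/k8s-*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/k8s-permission-boundary"
        }
      }
    },
    {
      "Sid": "DeleteOnlyTaggedManagedRoles",
      "Effect": "Allow",
      "Action": ["iam:DeleteRole", "iam:DeleteRolePolicy"],
      "Resource": "arn:aws:iam::123456789012:role/k8s-*",
      "Condition": {
        "StringEquals": {
          "iam:ResourceTag/managedBy": "iam-manager"
        }
      }
    }
  ]
}
```

The iam:PermissionsBoundary condition key makes CreateRole fail unless the pre-defined boundary is attached, the role-name resource pattern enforces the k8s-* prefix, and the iam:ResourceTag condition restricts deletion to roles the controller tagged itself.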

To install and try out iam-manager:

git clone git@github.com:keikoproj/iam-manager.git
cd iam-manager/hack
# update the config map according to your requirements; please refer to
# the config map documentation for all the configuration options
vim iammanager.keikoproj.io_iamroles-configmap.yaml
export KUBECONFIG=/Users/myhome/.kube/admin@eks-dev2-k8s
export AWS_PROFILE=admin_123456789012_account
# usage: ./install.sh [cluster_name] [aws_region] [aws_profile]
./install.sh eks-dev2-k8s us-west-2 admin_123456789012_account

Here is a simple example:

apiVersion: iammanager.keikoproj.io/v1alpha1
kind: Iamrole
metadata:
  name: iam-manager-iamrole
spec:
  # Add fields here
  PolicyDocument:
    Statement:
      - Effect: "Allow"
        Action:
          - "s3:Get*"
        Resource:
          - "arn:aws:s3:::intu-oim*"
        Sid: "AllowS3Access"
  AssumeRolePolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: "Allow"
        Action: "sts:AssumeRole"
        Principal:
          AWS:
            - "arn:aws:iam::XXXXXXXXXXX:role/20190504-k8s-kiam-role"

To submit: kubectl apply -f iam_role.yaml -n namespace1

Along with creating an IAM role and maintaining its desired state at all times, iam-manager also supports creating roles for IRSA (IAM Roles for Service Accounts) by adding the annotation iam.amazonaws.com/irsa-service-account: <service account name> to the Iamrole spec.

apiVersion: iammanager.keikoproj.io/v1alpha1
kind: Iamrole
metadata:
  name: iam-manager-iamrole-irsa
  annotations:
    iam.amazonaws.com/irsa-service-account: aws-sa
spec:
  # Add fields here
  PolicyDocument:
    Statement:
      - Effect: "Allow"
        Action:
          - "s3:Get*"
        Resource:
          - "arn:aws:s3:::intu-oim*"
        Sid: "AllowS3Access"

To summarize, adding iam-manager to a Kubernetes cluster not only provides a safe and convenient solution for AWS IAM role management inside a cluster but also allows application teams to create an IAM role as part of their deployment pipeline, along with other Kubernetes resources, via GitOps.

For a complete list of features, please check out the features section at https://github.com/keikoproj/iam-manager.

Also, a big thanks to Ed Lee, Kshama Jain, and other team members for their valuable contributions to iam-manager.
