Working with KIAM Roles in Kubernetes

I have been working with Kubernetes for more than one and a half years now, and security is always one of the major concerns in deployments. It is best to avoid putting secret keys in configuration files. On AWS, EC2 IAM roles do this job for us: we can avoid putting AWS secret keys in our deployments. I wanted a similar kind of configuration in my Kubernetes deployments.

While setting up EC2 roles in my production Kubernetes cluster with the KIAM tool, it was painful to dig through so many blogs to get the setup working correctly. That is when I thought of writing my own post about it, to ease the process for my fellow developers.

Prerequisites:

  1. A working Kubernetes cluster
  2. cert-manager already installed
  3. An IAM role for the master nodes
  4. An IAM role for the worker nodes
  5. Helm (v3.0.0)

Step 1: Create an IAM role named “kiam_server” with the following inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": "*"
    }
  ]
}
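If you prefer the command line over the console, the inline policy above can be attached with the AWS CLI. This is a sketch assuming the AWS CLI is configured with sufficient IAM permissions; the file and policy names here are illustrative:

```shell
# Write the inline policy from Step 1 to a file.
cat > kiam-server-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Validate the JSON locally before handing it to IAM.
python3 -m json.tool kiam-server-policy.json > /dev/null && echo "policy OK"

# Then attach it as an inline policy (requires IAM permissions):
#   aws iam put-role-policy --role-name kiam_server \
#     --policy-name kiam-server-sts \
#     --policy-document file://kiam-server-policy.json
```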

Step 2: Update the trust relationship for “kiam_server” as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXX:role/masters.cluster.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
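The trust relationship can likewise be applied from the CLI. A sketch, again assuming configured AWS credentials; the account ID stays a placeholder as in the document:

```shell
# Write the trust policy from Step 2 to a file.
cat > kiam-server-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXX:role/masters.cluster.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Validate the JSON locally.
python3 -m json.tool kiam-server-trust.json > /dev/null && echo "trust policy OK"

# Then update the role's trust relationship (requires IAM permissions):
#   aws iam update-assume-role-policy --role-name kiam_server \
#     --policy-document file://kiam-server-trust.json
```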

Step 3:

As per the KIAM documentation, we have to deploy the KIAM server on the master nodes and the KIAM agent on the worker nodes.

server:
  log:
    level: info
  assumeRoleArn: "arn:aws:iam::XXXX:role/kiam_server"
  gatewayTimeoutCreation: "1s"
  nodeSelector:
    kubernetes.io/role: "master"
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  extraHostPathMounts:
    - name: ssl-certs
      mountPath: /etc/ssl/certs
      readOnly: true
      hostPath: /etc/ssl/certs

agent:
  log:
    level: info
  gatewayTimeoutCreation: "1s"
  host:
    iptables: true
  nodeSelector:
    kubernetes.io/role: "node"
  extraHostPathMounts:
    - name: ssl-certs
      mountPath: /etc/ssl/certs
      readOnly: true
      hostPath: /etc/ssl/certs

Install KIAM with the above YAML configuration (saved as values.yaml). Note that with Helm v3 the release name is a positional argument and -n sets the namespace:

helm install kiam stable/kiam -f values.yaml -n kube-system

Once the service is running, you can verify it using the command below.

kubectl get daemonsets --all-namespaces -l app=kiam

NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR               AGE
kube-system   kiam-agent    2         2         2       2            2           kubernetes.io/role=node     24d
kube-system   kiam-server   3         3         3       3            3           kubernetes.io/role=master   24d

Step 4:

This part tests the setup above by deploying a simple app.

For Pods to access EC2 IAM roles through KIAM, the namespace needs to be annotated as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    iam.amazonaws.com/permitted: ".*"

Apply the above configuration:

kubectl apply -f namespace.default.yaml

Then create a test deployment; replace TEST_ROLE_NAME and NEW_NODE_NAME with values from your environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-iam-tester
  labels:
    app: aws-iam-tester
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: aws-iam-tester
  template:
    metadata:
      labels:
        app: aws-iam-tester
      annotations:
        iam.amazonaws.com/role: TEST_ROLE_NAME
    spec:
      nodeSelector:
        kubernetes.io/role: node
      nodeName: NEW_NODE_NAME
      tolerations:
        - key: kiam
          value: kiam
          effect: NoSchedule
      containers:
        - name: aws-iam-tester
          image: garland/aws-cli-docker:latest
          imagePullPolicy: Always
          command:
            - /bin/sleep
          args:
            - "3600"
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1
Once the application is running, you can verify it from inside the pod:

kubectl exec -it POD_NAME -- /bin/sh
aws sts get-caller-identity  # inside the pod shell

This should produce output like the following:

{
  "UserId": "AROA4UWMH6F32FKRFWPR3:kiam-kiam",
  "Account": "XXXX",
  "Arn": "arn:aws:sts::XXXX:assumed-role/TEST_ROLE_NAME/kiam-kiam"
}

Once you see this result, you are good to go ahead: configure roles for your applications and roll them out to the rest.

In high-traffic environments I have faced a few issues, especially with pods restarting; consider customizing the resource requests and limits to appropriate values.
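The chart's resource values can be overridden in values.yaml for this. The numbers below are illustrative starting points only, not measured recommendations; tune them against your own workload:

```yaml
server:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

agent:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 256Mi
```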

Software Engineer. Build everything required.