Create a private RDS instance from EKS using ACK (AWS Controllers for Kubernetes)

Shi · CI/CD/DevOps
4 min read · Jan 20, 2023

To cut a long story short, I need to create a private RDS instance for EKS to access; to do that, I need to create the RDS instance in the same VPC subnets as EKS; and, because my hands are itchy, I decided to do it via ACK (AWS Controllers for Kubernetes). And, emmm, I think that was probably not a wise decision. :)

However, since I managed to get it done eventually, I decided to pen down the steps, so that whoever has to (or desperately wants to) create an RDS instance using ACK can use this as a reference.

Dirk Michel wrote a great introductory article on Medium about the idea behind ACK and how to use it at a high level, using S3 and RDS as examples.

On Amazon EKS and ACK
Using ACK Service Controllers to provide a Kubernetes developer experience for interacting with AWS Services. (medium.com)

Step 1: install the ACK service controller into EKS using Helm; in my case, I need ACK to create RDS, so I install the ack-rds-controller.

Here are the snippets:

# install AWS RDS controller as per 
# https://aws-controllers-k8s.github.io/community/docs/user-docs/install/

# see https://github.com/aws-controllers-k8s/rds-controller/releases for
# latest version of the controller


export SERVICE=rds
export RELEASE_VERSION=v0.1.2
export ACK_K8S_NAMESPACE=ack-system
export AWS_REGION=ap-southeast-1

# authenticate to ECR public registry
aws ecr-public get-login-password --region us-east-1 | \
helm registry login --username AWS --password-stdin public.ecr.aws

# install AWS RDS Controller to EKS using helm
helm install --create-namespace -n $ACK_K8S_NAMESPACE ack-$SERVICE-controller \
oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart \
--version=$RELEASE_VERSION --set=aws.region=$AWS_REGION

# Create an OIDC identity provider for your cluster
export EKS_CLUSTER_NAME=Seeker-EKS-FG-ShiChao
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve
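Before moving on to the IRSA setup, a quick sanity check that the controller installed cleanly and its CRDs are registered doesn't hurt; something like this (same namespace and release name as above) should do:

# confirm the helm release and the controller pod are up
helm list -n $ACK_K8S_NAMESPACE
kubectl get pods -n $ACK_K8S_NAMESPACE

# confirm the RDS CRDs (DBInstance, DBSubnetGroup, ...) are registered
kubectl get crd | grep rds.services.k8s.aws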

Step 2: create an IRSA role for the EKS service account to use

# as per instructions from https://aws-controllers-k8s.github.io/community/docs/user-docs/irsa/
# create an IRSA role using trust.json
# and attach policy arn:aws:iam::aws:policy/AmazonRDSFullAccess to the role
# then annotate the service account with this newly created IRSA role ARN
# then restart the rds controller deployment to make it take effect

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
--region $AWS_REGION --query "cluster.identity.oidc.issuer" \
--output text | sed -e "s/^https:\/\///")

ACK_K8S_SERVICE_ACCOUNT_NAME=ack-$SERVICE-controller

read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust.json

ACK_CONTROLLER_IAM_ROLE="ack-${SERVICE}-controller"
ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK ${SERVICE} controller deployment on EKS cluster using Helm charts"
aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}"
ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text)

BASE_URL=https://raw.githubusercontent.com/aws-controllers-k8s/${SERVICE}-controller/main
POLICY_ARN_URL=${BASE_URL}/config/iam/recommended-policy-arn
POLICY_ARN_STRINGS="$(wget -qO- ${POLICY_ARN_URL})"

INLINE_POLICY_URL=${BASE_URL}/config/iam/recommended-inline-policy
INLINE_POLICY="$(wget -qO- ${INLINE_POLICY_URL})"

while IFS= read -r POLICY_ARN; do
  echo -n "Attaching $POLICY_ARN ... "
  aws iam attach-role-policy \
    --role-name "${ACK_CONTROLLER_IAM_ROLE}" \
    --policy-arn "${POLICY_ARN}"
  echo "ok."
done <<< "$POLICY_ARN_STRINGS"

# Annotate the service account with the ARN
kubectl describe serviceaccount/$ACK_K8S_SERVICE_ACCOUNT_NAME -n $ACK_K8S_NAMESPACE
export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN
kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN

# restart deployment
kubectl get deployments -n $ACK_K8S_NAMESPACE
kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment ack-rds-controller-rds-chart

# verify the IRSA setup
kubectl get pods -n $ACK_K8S_NAMESPACE
kubectl describe pod -n $ACK_K8S_NAMESPACE ack-rds-controller-rds-chart-9bbc45bcf-8ww6q | grep "^\s*AWS_"
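If you prefer to double-check from the AWS side as well, the role's trust policy and attached policies can be inspected with the CLI (same role name as created above):

# inspect the IRSA role trust policy and the policies attached to it
aws iam get-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --query Role.AssumeRolePolicyDocument
aws iam list-attached-role-policies --role-name "${ACK_CONTROLLER_IAM_ROLE}"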

Step 3: create the RDS instance using ACK

## refer to https://aws.amazon.com/blogs/database/deploy-amazon-rds-databases-for-applications-in-kubernetes/
# and https://medium.com/@micheldirk/on-amazon-eks-and-ack-660fa86cfa7f


# set up the application namespace and gather the EKS VPC/subnet information

APP_NAMESPACE=seeker
EKS_CLUSTER_NAME=Seeker-EKS-FG-SC

kubectl create ns ${APP_NAMESPACE}

EKS_VPC_ID=$(aws eks describe-cluster --name="${EKS_CLUSTER_NAME}" \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
EKS_SUBNET_IDS=$(aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=${EKS_VPC_ID}" \
--query 'Subnets[*].SubnetId' \
--output text
)

RDS_SUBNET_GROUP_NAME="seekerdb-subnet-group"
RDS_SUBNET_GROUP_DESCRIPTION="seeker RDS subnet group"

cat <<-EOF > db-subnet-groups.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSubnetGroup
metadata:
  name: ${RDS_SUBNET_GROUP_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  name: ${RDS_SUBNET_GROUP_NAME}
  description: ${RDS_SUBNET_GROUP_DESCRIPTION}
  subnetIDs:
$(printf "    - %s\n" ${EKS_SUBNET_IDS})
  tags: []
EOF

kubectl apply -f db-subnet-groups.yaml
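The controller reconciles the DBSubnetGroup asynchronously, so it is worth checking its status conditions before carrying on; for example:

# check that the subnet group was created and synced successfully
kubectl get dbsubnetgroup -n ${APP_NAMESPACE} ${RDS_SUBNET_GROUP_NAME}
kubectl describe dbsubnetgroup -n ${APP_NAMESPACE} ${RDS_SUBNET_GROUP_NAME}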

EKS_CIDR_RANGE=$(aws ec2 describe-vpcs \
--vpc-ids $EKS_VPC_ID \
--query "Vpcs[].CidrBlock" \
--output text
)

RDS_SECURITY_GROUP_ID=$(aws ec2 create-security-group \
--group-name "${RDS_SUBNET_GROUP_NAME}" \
--description "${RDS_SUBNET_GROUP_DESCRIPTION}" \
--vpc-id "${EKS_VPC_ID}" \
--output text
)

aws ec2 authorize-security-group-ingress \
--group-id "${RDS_SECURITY_GROUP_ID}" \
--protocol tcp \
--port 5432 \
--cidr "${EKS_CIDR_RANGE}"
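To verify the ingress rule landed on the right security group, you can describe it with the group ID returned above:

# confirm the security group allows port 5432 from the EKS VPC CIDR
aws ec2 describe-security-groups --group-ids "${RDS_SECURITY_GROUP_ID}" \
--query "SecurityGroups[].IpPermissions"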

RDS_DB_INSTANCE_NAME="seekerdb"
RDS_DB_INSTANCE_CLASS="db.m6i.large"
RDS_DB_STORAGE_SIZE=50
RDS_DB_USERNAME="seeker"
RDS_DB_PASSWORD="xxxx"

kubectl create secret generic -n "${APP_NAMESPACE}" seeker-postgres-creds \
--from-literal=username="${RDS_DB_USERNAME}" \
--from-literal=password="${RDS_DB_PASSWORD}" || true
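A quick way to confirm the secret carries the expected keys (without printing the password):

# list the keys stored in the credentials secret
kubectl describe secret -n "${APP_NAMESPACE}" seeker-postgres-creds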

cat <<-EOF > seekerdb.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: ${RDS_DB_INSTANCE_NAME}
  namespace: ${APP_NAMESPACE}
spec:
  allocatedStorage: ${RDS_DB_STORAGE_SIZE}
  autoMinorVersionUpgrade: true
  backupRetentionPeriod: 7
  dbInstanceClass: ${RDS_DB_INSTANCE_CLASS}
  dbInstanceIdentifier: ${RDS_DB_INSTANCE_NAME}
  dbName: jira
  dbSubnetGroupName: ${RDS_SUBNET_GROUP_NAME}
  engine: postgres
  engineVersion: "14.5"
  masterUsername: ${RDS_DB_USERNAME}
  masterUserPassword:
    namespace: ${APP_NAMESPACE}
    name: seeker-postgres-creds
    key: password
  multiAZ: false
  publiclyAccessible: false
  storageEncrypted: true
  storageType: gp2
  vpcSecurityGroupIDs:
    - ${RDS_SECURITY_GROUP_ID}
EOF

kubectl apply -f seekerdb.yaml
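RDS provisioning typically takes several minutes, so .status.endpoint will be empty at first. A simple polling loop (my own sketch, not part of the original run) can wait until the controller reports the endpoint before reading it:

# wait until the DBInstance status carries an endpoint address
until [ -n "$(kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
-o jsonpath='{.status.endpoint.address}')" ]; do
  echo "waiting for the RDS endpoint ..."
  sleep 30
done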

RDS_DB_INSTANCE_HOST=$(kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
-o jsonpath='{.status.endpoint.address}'
)
RDS_DB_INSTANCE_PORT=$(kubectl get dbinstance -n "${APP_NAMESPACE}" "${RDS_DB_INSTANCE_NAME}" \
-o jsonpath='{.status.endpoint.port}'
)

# to validate the DB instance creation
kubectl get dbinstance -A
kubectl describe dbinstance -n seeker
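As a side note, the AWS blog post linked above also shows a more declarative way to hand the endpoint to applications: an ACK FieldExport resource that copies a field from the DBInstance status into a ConfigMap. Roughly, it looks like the sketch below (the ConfigMap name seekerdb-conn is my own placeholder, and the schema is as I recall it from the ACK docs, so double-check before use):

# create the target configmap, then export the endpoint address into it
kubectl create configmap -n "${APP_NAMESPACE}" seekerdb-conn || true

cat <<-EOF | kubectl apply -f -
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: seekerdb-host
  namespace: ${APP_NAMESPACE}
spec:
  to:
    name: seekerdb-conn
    kind: configmap
  from:
    path: ".status.endpoint.address"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: ${RDS_DB_INSTANCE_NAME}
EOF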

Some gotchas:

  • in step 1, we used the env variable $AWS_REGION for the helm install, and that decides the region the RDS instance will be provisioned in. However, for the helm registry authentication, stick to us-east-1.
  • for every step, you could (and probably should, if you are doing this for the first time) log in to the AWS console to verify that the resource (e.g. the IRSA role) has been created correctly.
  • you could use a utility like k9s to monitor what is happening in the EKS cluster and make sure the pods are healthy and annotated properly.
  • if you need to psql into the database instance to initialize some databases and tables, you can do so easily with the command below (a connection example follows):
kubectl run psqlpod --image=postgres -i --tty -- sh
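Once inside that throwaway pod, connecting is just the usual psql invocation against the endpoint captured earlier (substitute the value of RDS_DB_INSTANCE_HOST; the username and database name are the ones used in this walkthrough):

# run from the psqlpod shell; psql will prompt for the master password
psql -h <RDS_DB_INSTANCE_HOST> -p 5432 -U seeker -d jira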
