How to organize AWS EKS

Kuma Yang
Jun 18, 2023


Deploying a project to AWS EKS is best done in the following order:
How to organize AWS EKS (this article)
Install Argo CD to deploy services on AWS EKS
How to set up AWS Secrets Manager with AWS EKS
Connecting GitHub and Argo CD to manage services
Managing environment variables in AWS EKS

I currently work as a backend engineer at a startup. Since it’s a startup, we can’t afford to hire a DevOps engineer, so we manage the infrastructure ourselves. For backend engineers in the same position who need to operate AWS EKS (Elastic Kubernetes Service) themselves, I’ll explain how to set it up and manage it over several articles.

In this article, I’ll set up each part of the environment and describe what each setting does.

First of all, choose the region for your EKS environment and configure the AWS CLI:

aws configure
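
For reference, the interactive prompts look like this (the values below are placeholders; use your own credentials and region):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-west-1
Default output format [None]: json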

I’ll use the “Amazon EKS Sample VPC — Private and Public subnets” template for the EKS environment, so I created the VPC with CloudFormation. You can refer to the AWS Creating a VPC documentation.

$ aws cloudformation create-stack \
--stack-name {your eks stack name} \
--template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

When you see “CREATE_COMPLETE” in the CloudFormation service, you can find a VPC named {your eks stack name}-VPC in the VPC service. You will also see 4 subnets (2 private, 2 public) on the Subnets tab.
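
You can also grab the stack outputs from the CLI instead of the console; the sample template exposes the VPC ID and subnet IDs there, and you will need both for the cluster configuration below:

$ aws cloudformation describe-stacks \
--stack-name {your eks stack name} \
--query 'Stacks[0].Outputs'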

Now the VPC environment for EKS is ready, and you can create the EKS cluster. One more thing: you have to install eksctl (you can install it by following the instructions provided in this documentation).
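
For example, on macOS with Homebrew (one of the install options in that documentation at the time of writing; other platforms have their own steps):

$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl
$ eksctl version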

To create a cluster, you first need to write a cluster configuration file.

cluster.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: {your cluster name}
  region: us-west-1

vpc:
  id: {your vpc id}
  cidr: '192.168.0.0/16'
  subnets:
    public:
      us-west-1a:
        id: {your public subnet id-1}
      us-west-1b:
        id: {your public subnet id-2}
    private:
      us-west-1a:
        id: {your private subnet id-1}
      us-west-1b:
        id: {your private subnet id-2}

nodeGroups:
  - name: {your node group name, ex: xxxxx-private-workers}
    instanceType: t3.medium
    desiredCapacity: 2
    privateNetworking: true
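
Before provisioning anything, you can sanity-check the file. Recent versions of eksctl support a dry run that prints the fully expanded configuration without creating any resources:

$ eksctl create cluster -f ./cluster.yaml --dry-run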

Create the cluster:

$ eksctl create cluster -f ./cluster.yaml

When the command finishes, the end of the output looks like this:

2023-06-08 16:07:58 [ℹ]  kubectl command should work with "/Users/kuma/.kube/config", try 'kubectl get nodes'
2023-06-08 16:07:58 [✔] EKS cluster "your cluster name" in "us-west-1" region is ready

Let’s try running “kubectl get nodes” as indicated in the result.

$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE     VERSION
ip-xxx-xxx-xxx-xxx.us-west-1.compute.internal   Ready    <none>   3m25s   v1.24.13-eks-0a21954
ip-yyy-yyy-yyy-yyy.us-west-1.compute.internal   Ready    <none>   3m27s   v1.24.13-eks-0a21954

In Amazon Elastic Kubernetes Service (EKS), you can verify that the cluster was created under EKS > Clusters. Later on, we will deploy the AWS Load Balancer Controller and Argo CD, each into its own namespace. Let’s register a namespace in advance for deploying services.

namespace.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: {your service namespace}
  annotations:
    iam.amazonaws.com/permitted: 'arn:aws:iam::xxxxxxxxxxxx:role/.*'

Create the namespace:

$ kubectl apply -f ./namespace.yaml
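
You can confirm the namespace was created:

$ kubectl get namespace {your service namespace}
NAME                       STATUS   AGE
{your service namespace}   Active   10s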

Now let’s add the aws-load-balancer-controller for external communication.

You can refer to the AWS reference documents.

First of all, download the IAM policy JSON file:

$ curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json

Second, create the policy:

$ aws iam create-policy \
--policy-name AWSLoadBalancerControllerXXXXXXXXPolicy \
--policy-document file://iam-policy.json

The result looks similar to this:

{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerXXXXXXXXPolicy",
        "PolicyId": "XXXXX",
        "Arn": "arn:aws:iam::111111111111:policy/AWSLoadBalancerControllerXXXXXXXXPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2023-06-08T07:34:07+00:00",
        "UpdateDate": "2023-06-08T07:34:07+00:00"
    }
}

Copy the “arn:aws:iam::111111111111:policy/AWSLoadBalancerControllerXXXXXXXXPolicy” value.

Associate the IAM OIDC provider, which allows Kubernetes service accounts in the cluster to assume IAM roles (IRSA):

$ eksctl utils associate-iam-oidc-provider --region=us-west-1 --cluster={your cluster name} --approve
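
You can confirm the provider was registered; the list should include an ARN containing your cluster’s OIDC ID:

$ aws iam list-open-id-connect-providers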

Create the IAM service account:

$ eksctl create iamserviceaccount \
--cluster={your cluster name} \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn={the policy ARN you copied above} \
--approve

When it finishes, the output ends with:

created serviceaccount "kube-system/aws-load-balancer-controller"
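
Under the hood, eksctl created an IAM role with the policy attached and annotated the service account with it. You can inspect the annotation (output trimmed):

$ kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/eksctl-...
  name: aws-load-balancer-controller
  namespace: kube-system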

Install the aws-load-balancer-controller using Helm:

$ helm repo add eks https://aws.github.io/eks-charts
$ kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName={your cluster name} --set serviceAccount.create=false --set serviceAccount.name={the name when you create the iam service account}

The result:
NAME: aws-load-balancer-controller
LAST DEPLOYED: Thu Jun 8 16:44:29 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!

Let’s verify if it has been created successfully.

$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           38s
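
To actually expose a service through the controller, you create an Ingress that it reconciles into an Application Load Balancer. Here is a minimal sketch, assuming a Service named my-service listening on port 80 in the namespace created earlier (my-service is a placeholder, not something created in this article):

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: {your service namespace}
  annotations:
    # create an internet-facing ALB and route directly to pod IPs
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb  # IngressClass installed by the Helm chart
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # placeholder backend Service
                port:
                  number: 80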

Now let’s install Argo CD and operate the actual service.

You can find the details in the next article, Install Argo CD to deploy services on AWS EKS.
