Setup EKS Cluster with Pulumi and Helm

Alon Valadji
Published in Israeli Tech Radar
8 min read · Dec 14, 2021

TL;DR
Check out the full code at the GitHub repo:
https://github.com/alonronin/rn2021/tree/main/.cluster

Prerequisites

We will need to have node, yarn (or npm), pulumi, and lens installed on our computer. Install each of them before continuing.

Set up our infrastructure code

Let’s generate a project with the Pulumi CLI. Run the following command and we’ll get Pulumi’s wizard:

$ pulumi new

Select aws-typescript from the list, complete the wizard by supplying the project’s name and AWS region, and choose dev as our stack:

Now let’s add a couple of dependencies we need for EKS and Kubernetes:

$ yarn add @pulumi/eks @pulumi/kubernetes

Open the index.ts file and let’s generate code for our cluster:

import * as pulumi from '@pulumi/pulumi';
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';

const projectName = pulumi.getProject();

const cluster = new eks.Cluster(projectName, {
  instanceType: 't2.medium',
  createOidcProvider: true,
});

// Export the cluster's kubeconfig.
export const kubeConfig = cluster.kubeconfig;

// createOidcProvider above makes Pulumi create this for us,
// but we still guard against it being undefined.
const clusterOidcProvider = cluster.core.oidcProvider;

if (!clusterOidcProvider) {
  throw new Error('no cluster oidc provider');
}

Now let’s create a provider for our Kubernetes cluster:

const provider = new k8s.Provider('k8s', {
  kubeconfig: kubeConfig.apply(JSON.stringify),
});

We will need to create namespaces for the ingress-nginx, cert-manager, and external-dns apps, so let’s create a helper function for that:

import * as k8s from '@pulumi/kubernetes';

export const createNamespace = (namespace: string, provider: k8s.Provider) =>
  new k8s.core.v1.Namespace(namespace, undefined, {
    provider,
  });

We will also need to create a ServiceAccount for the cert-manager and external-dns apps; here is the function for that:

import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';

export const createServiceAccount = (
  name: string,
  namespace: k8s.core.v1.Namespace,
  clusterOidcProvider: aws.iam.OpenIdConnectProvider,
  provider: k8s.Provider,
  policyArn: pulumi.Output<string>
) => {
  // Trust policy that lets only this exact service account assume the role
  // through the cluster's OIDC provider (IAM Roles for Service Accounts).
  const saAssumeRolePolicy = pulumi
    .all([
      clusterOidcProvider.url,
      clusterOidcProvider.arn,
      namespace.metadata.name,
    ])
    .apply(([url, arn, namespace]) =>
      aws.iam.getPolicyDocument({
        statements: [
          {
            actions: ['sts:AssumeRoleWithWebIdentity'],
            conditions: [
              {
                test: 'StringEquals',
                values: [`system:serviceaccount:${namespace}:${name}`],
                variable: `${url.replace('https://', '')}:sub`,
              },
            ],
            effect: 'Allow',
            principals: [
              {
                identifiers: [arn],
                type: 'Federated',
              },
            ],
          },
        ],
      })
    );

  const saRole = new aws.iam.Role(name, {
    assumeRolePolicy: saAssumeRolePolicy.json,
  });

  // Attach the supplied policy to the role.
  new aws.iam.RolePolicyAttachment(name, {
    policyArn,
    role: saRole,
  });

  return new k8s.core.v1.ServiceAccount(
    name,
    {
      metadata: {
        namespace: namespace.metadata.name,
        name,
        annotations: {
          'eks.amazonaws.com/role-arn': saRole.arn,
        },
      },
    },
    { provider }
  );
};
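To make the trust policy easier to reason about, here is a plain-TypeScript sketch (our own addition, with no Pulumi or AWS calls; the issuer URL below is a fabricated example) of how the StringEquals condition is derived from the OIDC issuer URL and the service account’s namespace and name:

```typescript
// Illustration only: derives the StringEquals condition used in the
// assume-role policy above. The issuer URL here is a made-up example.
function irsaCondition(issuerUrl: string, namespace: string, name: string) {
  return {
    // the OIDC condition variable is the issuer host/path, scheme stripped
    variable: `${issuerUrl.replace('https://', '')}:sub`,
    // the subject must match this exact service account
    value: `system:serviceaccount:${namespace}:${name}`,
  };
}

const cond = irsaCondition(
  'https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE',
  'external-dns',
  'external-dns'
);
console.log(cond.variable); // oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub
console.log(cond.value); // system:serviceaccount:external-dns:external-dns
```

Only a pod running under that precise namespace/name pair can assume the role, which is why we create one role per service account.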

For our ingress-nginx app let’s create a function for that:

export const ingressNginx = (provider: k8s.Provider) => {
  const ingressNginxNamespace = createNamespace('ingress-nginx', provider);

  return {
    ingressNginxNamespace,
  };
};

For our external-dns app we also want a policy scoped to our hosted zone, with our OIDC provider as the trusted identity, and a service account with that policy attached.

Here is the function for that:

import * as k8s from '@pulumi/kubernetes';
import { createNamespace, createServiceAccount } from './utils';
import * as aws from '@pulumi/aws';

export const externalDns = (
  clusterOidcProvider: aws.iam.OpenIdConnectProvider,
  provider: k8s.Provider,
  zoneId: string
) => {
  const externalDnsNamespace = createNamespace('external-dns', provider);

  const externalDnsPolicy = new aws.iam.Policy('external-dns', {
    description: 'External Dns policy',
    policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: ['route53:ChangeResourceRecordSets'],
          Resource: [`arn:aws:route53:::hostedzone/${zoneId}`],
        },
        {
          Effect: 'Allow',
          Action: ['route53:ListHostedZones', 'route53:ListResourceRecordSets'],
          Resource: ['*'],
        },
      ],
    }),
  });

  const externalDnsServiceAccount = createServiceAccount(
    'external-dns',
    externalDnsNamespace,
    clusterOidcProvider,
    provider,
    externalDnsPolicy.arn
  );

  return {
    externalDnsNamespace,
    externalDnsPolicy,
    externalDnsServiceAccount,
  };
};
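If you want to sanity-check the zone scoping without running pulumi up, the policy document above can be expressed as a plain function (this helper is our own addition for illustration, not part of the Pulumi API):

```typescript
// Sketch: builds the same external-dns policy document as above as a plain
// string, so the zone scoping can be asserted in a unit test.
function externalDnsPolicyDoc(zoneId: string): string {
  return JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: ['route53:ChangeResourceRecordSets'],
        Resource: [`arn:aws:route53:::hostedzone/${zoneId}`],
      },
      {
        Effect: 'Allow',
        Action: ['route53:ListHostedZones', 'route53:ListResourceRecordSets'],
        Resource: ['*'],
      },
    ],
  });
}

// Record changes are allowed only on our zone; the list actions must stay
// global because Route 53 list APIs do not support per-zone ARNs.
const doc = JSON.parse(externalDnsPolicyDoc('Z0123456789'));
console.log(doc.Statement[0].Resource[0]); // arn:aws:route53:::hostedzone/Z0123456789
```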

We also need to do the same for cert-manager; however, the policy is a little different:

import * as k8s from '@pulumi/kubernetes';
import { createNamespace, createServiceAccount } from './utils';
import * as aws from '@pulumi/aws';

export const certManager = (
  clusterOidcProvider: aws.iam.OpenIdConnectProvider,
  provider: k8s.Provider,
  zoneId: string
) => {
  const certManagerNamespace = createNamespace('cert-manager', provider);

  const certManagerPolicy = new aws.iam.Policy('cert-manager', {
    description: 'Cert manager policy',
    policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: 'route53:GetChange',
          Resource: 'arn:aws:route53:::change/*',
        },
        {
          Effect: 'Allow',
          Action: [
            'route53:ChangeResourceRecordSets',
            'route53:ListResourceRecordSets',
          ],
          Resource: `arn:aws:route53:::hostedzone/${zoneId}`,
        },
        {
          Effect: 'Allow',
          Action: 'route53:ListHostedZonesByName',
          Resource: '*',
        },
      ],
    }),
  });

  const certManagerServiceAccount = createServiceAccount(
    'cert-manager',
    certManagerNamespace,
    clusterOidcProvider,
    provider,
    certManagerPolicy.arn
  );

  return {
    certManagerNamespace,
    certManagerPolicy,
    certManagerServiceAccount,
  };
};

And last, back in index.ts, we can invoke each of them as follows:

// zoneId is the Route 53 hosted-zone id for our domain; here we read it from
// the stack's config (set it with: pulumi config set zoneId <your-zone-id>).
const zoneId = new pulumi.Config().require('zoneId');

export const { ingressNginxNamespace } = ingressNginx(provider);

export const { certManagerNamespace, certManagerServiceAccount } = certManager(
  clusterOidcProvider,
  provider,
  zoneId
);

export const { externalDnsNamespace, externalDnsServiceAccount } = externalDns(
  clusterOidcProvider,
  provider,
  zoneId
);

Let’s have the Pulumi CLI do its job; in the terminal we will run:

$ pulumi up

You can review the changes before applying them or cancel it.

Now our cluster is up and running, and we can take the kubeconfig from Pulumi’s output and connect to the cluster with Lens. Just click File -> Add Cluster from the menu and paste the kubeconfig.

Add Helm Repos in Lens

Open Lens’ preferences, go to the Kubernetes tab, and scroll down to Helm Charts:

Then click the Add Custom Helm Repo button and add the repos. Here are the repo URLs:

Ingress Nginx

https://kubernetes.github.io/ingress-nginx

Cert Manager

https://charts.jetstack.io

External Dns

https://kubernetes-sigs.github.io/external-dns/

Installing Ingress-Nginx

Open Lens and go to the Apps -> Charts tab:

Click on ingress-nginx and hit Install:

Select the namespace created by Pulumi and click Install again:

Now we can see we have it installed in Apps -> Releases tab in Lens:

Installing External Dns

Again, from the Apps -> Charts tab, click external-dns and hit Install:

In the values.yaml that opens, scroll to serviceAccount, change create to false, and provide the name of the service account we created with Pulumi in the name field.

Now click on the install button.

Go to Apps -> Releases tab and see that it was installed correctly:

You can go to Workloads -> Pods, click on the external-dns pod, and see that the correct role ARN is passed to the pod via environment variables:

You can click on the Pod Logs icon at the top and see that everything works fine.
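Behind the scenes, the EKS pod identity webhook injects the role through two environment variables; here is a small sketch of reading them (the variable names are the ones EKS actually sets, but the values below are fabricated examples):

```typescript
// Sketch: the two environment variables the EKS pod identity webhook injects
// into pods whose ServiceAccount carries the eks.amazonaws.com/role-arn
// annotation. The AWS SDK picks these up automatically.
function readIrsaEnv(env: Record<string, string | undefined>) {
  return {
    roleArn: env['AWS_ROLE_ARN'],
    tokenFile: env['AWS_WEB_IDENTITY_TOKEN_FILE'],
  };
}

// Example with made-up values, as they would appear inside the pod:
const irsa = readIrsaEnv({
  AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/external-dns',
  AWS_WEB_IDENTITY_TOKEN_FILE:
    '/var/run/secrets/eks.amazonaws.com/serviceaccount/token',
});
console.log(irsa.roleArn); // arn:aws:iam::123456789012:role/external-dns
```

Inside the real pod you would read process.env instead; these are the values you should see in the Lens pod details view.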

Installing Cert Manager

Go to Apps -> Charts tab and select cert-manager, click install:

Select the cert-manager namespace we created with pulumi:

Now, in the values.yaml that opens, go to serviceAccount, change create to false, and supply the name of the service account we created with Pulumi in the name field:

Scroll down to securityContext, comment everything out, uncomment the section that has fsGroup: 1001 and runAsUser: 1001, and remove the enabled: false field.

Now click Install and check in the Apps -> Releases tab that everything is installed correctly:

You can check out the Workloads -> Pods tab, click on the main cert-manager pod (not the cainjector or the webhook), and see that the correct role ARN is passed to the pod’s environment variables:

Now we need to create a ClusterIssuer that connects to the Let’s Encrypt API server and configures the DNS01 challenge.

Just click the + button at the bottom and create a yaml resource:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: example-issuer-account-key
    solvers:
      # example: cross-account zone management for example.com
      # this solver uses ambient credentials (i.e. inferred from the
      # environment or EC2 Metadata Service) to assume a role in a
      # different account
      - selector:
          dnsZones:
            - 'example.com'
        dns01:
          route53:
            region: us-east-1
            hostedZoneID: DIKER8JEXAMPLE

Deploy an Application

Let’s deploy an example app to see how it all works. Open the Create Resource tab in Lens (the + button) and copy, paste, and run each of the following:

First we create our Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

Now let’s create a Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

And the magic happens with our Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - '*.example.com'
      secretName: root-tls
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80

Now we can check our Pods in the default namespace and see our deployment:

Our Services at Network -> Services tab:

And our Ingress at the Ingresses tab:

Let’s check that our certificate is ready at the Custom Resources -> cert-manager.io -> Certificates tab:

Now open the URL in your browser and see that it all works.

Like always, share with me your thoughts and suggestions at the comments section 😃.
