Add EFS CSI Drivers to your EKS Kubernetes Cluster using Terraform with Helm provider

Stefano Monti · Published in AWS Infrastructure · Apr 9, 2023 · 7 min read

If you are looking for an automatic way to create an AWS EFS CSI Driver that runs inside your EKS Cluster from the moment you deploy the cluster itself, you are in the right place!

In this guide, we will use Terraform to create all the AWS resources we need to have an EKS cluster up and running inside a dedicated VPC.
All the nodes will run inside the private subnets of the newly created VPC.

Let’s begin by downloading the code from the repository I created on my GitHub profile. You can find it here.

The first step is to create the VPC and the EKS cluster; I have already covered this topic in other posts. You can find all the references in the following articles:
Terraform prerequisites: https://medium.com/aws-infrastructure/aws-vpc-provided-with-terraform-8f9012f6ef39
VPC infrastructure: https://medium.com/aws-infrastructure/create-aws-vpc-infrastructure-with-terraform-308afed9fe31
EKS Cluster: https://medium.com/@stefano.monti02/setup-kubernetes-cluster-with-aws-eks-and-terraform-c46d5e916ad9

The above posts contain all the instructions for setting up the prerequisites needed to follow this article.

We begin with the following Terraform file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

data "aws_availability_zones" "available" {
  state = "available"
}


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "4.0.0"

  name = "stw-vpc"
  cidr = "10.0.0.0/16"

  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.0.0/22", "10.0.4.0/22", "10.0.8.0/22"]
  public_subnets  = ["10.0.100.0/22", "10.0.104.0/22", "10.0.108.0/22"]

  enable_nat_gateway = true
  single_nat_gateway = true
  enable_vpn_gateway = false
}


locals {
  vpc_id              = module.vpc.vpc_id
  vpc_cidr            = module.vpc.vpc_cidr_block
  public_subnets_ids  = module.vpc.public_subnets
  private_subnets_ids = module.vpc.private_subnets
  subnets_ids         = concat(local.public_subnets_ids, local.private_subnets_ids)
}



################
# EKS MODULE #
################

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.0"

cluster_name = "stw-cluster"
cluster_version = "1.24"

cluster_endpoint_public_access = true

cluster_addons = {
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
before_compute = true
service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
configuration_values = jsonencode({
env = {
# Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
ENABLE_PREFIX_DELEGATION = "true"
WARM_PREFIX_TARGET = "1"
}
})
}
}

vpc_id = local.vpc_id
subnet_ids = local.private_subnets_ids
control_plane_subnet_ids = local.private_subnets_ids

# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
instance_types = ["t3.medium"]
iam_role_attach_cni_policy = true
}

eks_managed_node_groups = {
stw_node_wg = {
min_size = 2
max_size = 6
desired_size = 2
}
}

}




################################
# ROLES FOR SERVICE ACCOUNTS #
################################

module "vpc_cni_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "~> 5.0"

role_name_prefix = "VPC-CNI-IRSA"
attach_vpc_cni_policy = true
vpc_cni_enable_ipv4 = true

oidc_providers = {
main = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:aws-node"]
}
}
}

If you need clarification about the resources in this file, check this post (the second link above).

Now let’s add what we need to get the EFS CSI driver up and running in your cluster.

Provider


provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
command = "aws"
}
}
}

This code is a Terraform provider configuration for Helm, the Kubernetes package manager, written in the HashiCorp Configuration Language (HCL).

The kubernetes block in the provider configuration establishes the connection to the Kubernetes cluster, which is managed by the AWS Elastic Kubernetes Service (EKS) module configured in the same Terraform code.

The host attribute is set to the cluster_endpoint output of the EKS module, which is the endpoint of the Kubernetes API server.

The cluster_ca_certificate attribute is set to the cluster_certificate_authority_data output of the EKS module, the base64-encoded certificate authority (CA) data for the cluster, which is used to verify the identity of the Kubernetes API server.

The exec block configures an authentication method that obtains a token by running an AWS CLI command. The api_version attribute specifies the Kubernetes client authentication API version to use, and the args attribute specifies the arguments passed to the aws command; in this case, the args array includes the cluster_name output of the EKS module.

Overall, this configuration enables the Helm provider to use AWS credentials to connect to the Kubernetes cluster managed by the EKS module and to install, upgrade, and manage Kubernetes applications with Helm.
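
As a side note, if you prefer not to depend on the AWS CLI being available where Terraform runs, a common alternative is the aws_eks_cluster_auth data source. A minimal sketch, which would replace the exec-based configuration above (keep in mind the token it returns is short-lived, which is why the exec approach is generally preferred):

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    # Short-lived token fetched at plan/apply time instead of running the AWS CLI.
    token                  = data.aws_eks_cluster_auth.this.token
  }
}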

Helm resource

resource "helm_release" "aws_efs_csi_driver" {
chart = "aws-efs-csi-driver"
name = "aws-efs-csi-driver"
namespace = "kube-system"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"

set {
name = "image.repository"
value = "602401143452.dkr.ecr.eu-west-3.amazonaws.com/eks/aws-efs-csi-driver"
}

set {
name = "controller.serviceAccount.create"
value = true
}

set {
name = "controller.serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value = module.attach_efs_csi_role.iam_role_arn
}

set {
name = "controller.serviceAccount.name"
value = "efs-csi-controller-sa"
}
}

This Terraform resource block installs the Helm chart for the AWS Elastic File System (EFS) Container Storage Interface (CSI) driver on the Kubernetes cluster.

The resource type is helm_release and its name is aws_efs_csi_driver. The chart parameter specifies the name of the Helm chart being deployed, the name parameter sets the name of the release, and the namespace parameter defines the Kubernetes namespace where the resources will be created. The repository parameter indicates where the chart comes from.

The resource also configures a number of Helm chart values through set blocks. In each set block, the name parameter defines the key of the configuration value, and the value parameter specifies the value being set.

The first set block sets the chart’s image.repository value to the ECR repository of the Amazon EFS CSI driver image for the cluster’s region (eu-west-1 here).

The second set block tells the chart to create a service account for the EFS CSI controller by setting the create parameter to true.

The third set block sets the annotation eks.amazonaws.com/role-arn on the newly created service account to the value supplied by module.attach_efs_csi_role.iam_role_arn.

Finally, the fourth set block sets the name of the newly created service account to efs-csi-controller-sa.
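
As an aside, the same four settings could also be passed as a single values document instead of separate set blocks; a roughly equivalent sketch (note that this avoids having to escape the dots in the annotation key):

resource "helm_release" "aws_efs_csi_driver" {
  chart      = "aws-efs-csi-driver"
  name       = "aws-efs-csi-driver"
  namespace  = "kube-system"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"

  # One YAML document instead of four set blocks.
  values = [yamlencode({
    image = {
      repository = "602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/aws-efs-csi-driver"
    }
    controller = {
      serviceAccount = {
        create = true
        name   = "efs-csi-controller-sa"
        annotations = {
          "eks.amazonaws.com/role-arn" = module.attach_efs_csi_role.iam_role_arn
        }
      }
    }
  })]
}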

Role for EFS CSI Service Account


module "attach_efs_csi_role" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

role_name = "efs-csi"
attach_efs_csi_policy = true

oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:efs-csi-controller-sa"]
}
}
}

This Terraform module creates an AWS IAM role named "efs-csi" and attaches the EFS CSI policy to it. That policy allows the Container Storage Interface (CSI) driver for the Elastic File System (EFS) to manage EFS resources, so the Kubernetes cluster can use EFS as persistent storage.

The module is built on the pre-built module "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" from the Terraform Registry, which simplifies creating IAM roles for Kubernetes service accounts running on Elastic Kubernetes Service (EKS) clusters.

The oidc_providers block specifies the OpenID Connect (OIDC) identity provider for the role. The provider_arn attribute specifies the ARN (Amazon Resource Name) of the OIDC provider, and the namespace_service_accounts attribute specifies the Kubernetes namespace and service account names that can assume this IAM role. In this scenario, the "efs-csi" IAM role can be assumed by the service account "efs-csi-controller-sa" in the "kube-system" namespace.

In short, this module creates an IAM role with the given name, attaches the EFS CSI policy to it, and restricts which Kubernetes service account can assume the role. This makes it much easier to create IAM roles for service accounts on EKS clusters that need access to EFS storage.
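
For reference, the trust relationship the module generates behind the scenes looks roughly like the following sketch (illustration only; the module already creates it for you, so you don't need to add this):

data "aws_iam_policy_document" "efs_csi_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    # Only the efs-csi-controller-sa service account in kube-system may assume the role.
    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:sub"
      values   = ["system:serviceaccount:kube-system:efs-csi-controller-sa"]
    }
  }
}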

EFS File System


resource "aws_security_group" "allow_nfs" {
name = "allow nfs for efs"
description = "Allow NFS inbound traffic"
vpc_id = local.vpc_id

ingress {
description = "NFS from VPC"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = [local.vpc_cidr]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

}


resource "aws_efs_file_system" "stw_node_efs" {
creation_token = "efs-for-stw-node"
}


resource "aws_efs_mount_target" "stw_node_efs_mt_0" {
file_system_id = aws_efs_file_system.stw_node_efs.id
subnet_id = module.vpc.private_subnets[0]
security_groups = [aws_security_group.allow_nfs.id]
}

resource "aws_efs_mount_target" "stw_node_efs_mt_1" {
file_system_id = aws_efs_file_system.stw_node_efs.id
subnet_id = module.vpc.private_subnets[1]
security_groups = [aws_security_group.allow_nfs.id]
}

This part of the Terraform configuration creates the AWS resources related to EFS (Elastic File System) and the security group it needs.

The configuration creates the following resources:
- an "allow_nfs" security group that permits NFS inbound traffic on port 2049 from the VPC CIDR block and all outbound traffic;
- an Amazon EFS file system named "stw_node_efs", identified by a creation token;
- two EFS mount targets, each attached to a different private subnet of the VPC and associated with the "allow_nfs" security group (see the count-based sketch below if you want one mount target per subnet).
The security group and the mount targets reference the local variables defined earlier, such as the VPC ID and the VPC CIDR block.
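
The VPC actually has three private subnets, so if you prefer one mount target per subnet, a count-based sketch like the following could replace the two resources above:

resource "aws_efs_mount_target" "stw_node_efs_mt" {
  # One mount target per private subnet of the VPC.
  count = length(module.vpc.private_subnets)

  file_system_id  = aws_efs_file_system.stw_node_efs.id
  subnet_id       = module.vpc.private_subnets[count.index]
  security_groups = [aws_security_group.allow_nfs.id]
}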

With this configuration you have an EFS file system in AWS and the security group needed to allow NFS traffic to and from it.
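
To actually consume the file system from your pods, you can expose it through a StorageClass that uses the EFS CSI driver for dynamic provisioning. A minimal sketch, assuming you also configure the Terraform kubernetes provider the same way as the helm provider above (the storage class name efs-sc is an arbitrary choice):

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    command     = "aws"
  }
}

resource "kubernetes_storage_class_v1" "efs" {
  metadata {
    name = "efs-sc"
  }

  storage_provisioner = "efs.csi.aws.com"

  parameters = {
    provisioningMode = "efs-ap"                            # dynamic provisioning via EFS access points
    fileSystemId     = aws_efs_file_system.stw_node_efs.id # the file system created above
    directoryPerms   = "700"
  }
}

A PersistentVolumeClaim that references storageClassName efs-sc will then get an EFS access point provisioned for it automatically.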

That’s all: now it’s time to run, test, and destroy the application.

To run the application, follow the same instructions I provided here.
To test the application, follow this link directly from the official AWS docs.
To save money, remember to destroy the infrastructure when you are not using it:

$ terraform destroy

Thanks for reading

Bye 😘
