Deploying a cloud-native application on AWS using Terraform

Sanket Bengali
Jun 19, 2019 · 3 min read


Images: AWS, Terraform, Kubernetes, Helm

A large-scale, cloud-native, distributed system may include multiple AWS services:

  1. Authentication and authorization: IAM
  2. Networking: VPC
  3. Storage: EBS (for persistent volumes) and S3 (for the tfstate backend)
  4. Kubernetes cluster: ECR (container registry), EKS (managed Kubernetes) and EC2 (Kubernetes worker nodes in an auto-scaling group)
  5. Additional stateful services that live outside the Kubernetes cluster, such as EFS (file system), RDS (managed database) and Elasticsearch. (At the time of writing, cloud-native storage solutions are still at a very early stage and will take time to mature, so stateful services are kept outside the Kubernetes cluster.)

Automating the deployment of such a large-scale application in a completely isolated, multi-tenant environment for multiple customers on a public cloud like AWS can be challenging.

Terraform provides an abstraction to deploy and provision various resources on a large number of providers, including AWS. The full list is here.
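For reference, a minimal AWS provider block looks like this (the region is just a placeholder; credentials come from the usual AWS CLI or environment mechanisms):

```
# Minimal AWS provider configuration; region is a placeholder.
provider "aws" {
  region = "us-east-1"
}
```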

Terraform applies graph theory to Infrastructure as Code: it builds a dependency graph of resources, which makes it a powerful tool for automating complex infrastructure deployment and management workflows.

Cloudposse maintains a large collection of open-source Terraform modules for AWS.

These modules can be used in a plug-and-play fashion to create and manage various AWS resources for an application.

Disclaimer: The content in this story is advanced and requires knowledge of AWS services (distributed systems), Terraform, Kubernetes and Helm.

For the AWS services mentioned above, the following Terraform modules from Cloudposse can be used:

  1. Tfstate backend
  2. Label
  3. VPC
  4. Subnets
  5. EFS
  6. EKS cluster
  7. EKS workers (including EC2 auto-scaling group)
  8. Elasticsearch

Below is a high-level flow diagram illustrating module and resource deployment using the Terraform AWS, Kubernetes and Helm providers.

Note: This diagram focuses on the Kubernetes and Helm configuration and deployments. It does not include common modules and resources like Label, VPC, subnets, security groups, IAM roles, etc.

Here is the GitHub link to a sample solution that includes an additional module used to set up and configure Kubernetes after the EKS cluster is deployed, and then install Helm charts on top of it:

In the Terraform code for deploying a complete application with a single terraform apply on AWS, these modules are executed (in parallel or sequentially, based on their dependencies) from a “root module” (for example, my_app.tf).
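As a rough sketch of such a root module (module sources and input names are illustrative, based on the Cloudposse registry modules; check each module’s README and pin versions in practice):

```
# my_app.tf — illustrative root module; sources and inputs are examples only.

module "vpc" {
  source     = "cloudposse/vpc/aws"
  namespace  = "acme"
  stage      = "prod"
  name       = "my-app"
  cidr_block = "10.0.0.0/16"
}

module "subnets" {
  source             = "cloudposse/dynamic-subnets/aws"
  namespace          = "acme"
  stage              = "prod"
  name               = "my-app"
  availability_zones = ["us-east-1a", "us-east-1b"]
  vpc_id             = module.vpc.vpc_id
  igw_id             = module.vpc.igw_id
  cidr_block         = module.vpc.vpc_cidr_block
}

module "eks_cluster" {
  source     = "cloudposse/eks-cluster/aws"
  namespace  = "acme"
  stage      = "prod"
  name       = "my-app"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.subnets.public_subnet_ids
}

# EFS, Elasticsearch, EKS workers, etc. are declared the same way; Terraform
# orders and parallelizes them from the dependency graph, not file position.
```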

The flow goes like this (after initial modules like labels, VPC, subnets, etc. are completed):

1. EFS, EKS cluster and Elasticsearch deployments start in parallel, since they have no dependencies on each other.

  • The EKS cluster and Elasticsearch domain usually take a long time, depending on their size (number and type of instances).

2. After the EKS cluster is complete, EKS worker nodes are created using an auto-scaling group.

  • Here, the worker nodes need to run a bootstrap script to be added to the EKS cluster (using the cluster endpoint and CA data), as sketched below.
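A sketch of the worker-node module call, assuming the Cloudposse eks-workers module and its documented inputs (verify the exact names against its README):

```
module "eks_workers" {
  source        = "cloudposse/eks-workers/aws"
  namespace     = "acme"
  stage         = "prod"
  name          = "my-app"
  instance_type = "t3.large"
  vpc_id        = module.vpc.vpc_id
  subnet_ids    = module.subnets.public_subnet_ids
  min_size      = 2
  max_size      = 4

  # The module renders the /etc/eks/bootstrap.sh userdata from these values,
  # which is how the instances join the cluster.
  cluster_name                       = module.eks_cluster.eks_cluster_id
  cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
  cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
  cluster_security_group_id          = module.eks_cluster.security_group_id
}
```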

3. After the auto-scaling group is complete, a kubeconfig file is generated, which can be used by the Kubernetes provider.

4. Define the Terraform Kubernetes provider in any of the three ways the provider supports.
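One of those ways, sketched below, is to configure the provider directly from the EKS cluster outputs and an aws_eks_cluster_auth token (a kubeconfig file path or an exec plugin are the alternatives); the output names assume the Cloudposse eks-cluster module:

```
data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
  load_config_file       = false # provider versions < 2.0
}
```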

5. Once the Kubernetes provider is defined and the auto-scaling group is ready, the aws-auth config map is applied for AWS authentication.

  • This step registers the EC2 instances with the Kubernetes cluster and makes them “Ready” to be used as worker nodes.
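A minimal sketch of that ConfigMap, assuming the worker IAM role ARN is exposed by the workers module as workers_role_arn:

```
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Maps the worker IAM role to the node groups so kubelets can register.
    mapRoles = <<-YAML
      - rolearn: ${module.eks_workers.workers_role_arn}
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
    YAML
  }
}
```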

6. Once the worker nodes are “Ready”, the service account for Tiller and the application namespace can be created in parallel, since they do not depend on each other (see the sketch after step 7b):

7a. After the service account is created, it needs to be bound to the “cluster-admin” ClusterRole.

7b. Once the Elasticsearch domain is complete, an Elasticsearch service can be deployed in the application namespace.
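A sketch of steps 6 and 7a (application namespace, Tiller service account and its cluster-admin binding); the resource names are illustrative:

```
resource "kubernetes_namespace" "app" {
  metadata {
    name = "my-app"
  }
}

resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.tiller.metadata[0].name
    namespace = "kube-system"
  }
}
```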

8. After the service account’s cluster role binding, the Helm provider is defined, which installs the Tiller pod in the kube-system namespace.

  • This is equivalent to the “helm init” command; once it completes, Helm charts can be deployed on the Kubernetes cluster.
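A sketch of the Helm provider block, assuming the 0.x provider series that still managed Tiller (install_tiller and service_account were removed in later, Helm 3 based versions):

```
provider "helm" {
  install_tiller  = true
  service_account = kubernetes_service_account.tiller.metadata[0].name
  namespace       = "kube-system"

  kubernetes {
    host                   = module.eks_cluster.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
    load_config_file       = false
  }
}
```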

9. Once Helm initialization is complete, Helm charts can be deployed either from a remote repository or from a local path.
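For example, a chart installed from a remote repository (the repository URL and chart name are placeholders; a local path in chart works too):

```
resource "helm_release" "my_app" {
  name       = "my-app"
  repository = "https://charts.example.com" # or chart = "./charts/my-app" for a local path
  chart      = "my-app"
  namespace  = kubernetes_namespace.app.metadata[0].name

  set {
    name  = "image.tag"
    value = "1.0.0"
  }
}
```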
