AKS Terraform CI/CD with GitLab

Terraform for AKS with GitLab — awesome CI/CD

Hetal Sonavane
DevOps-Journey
3 min read · Feb 5, 2021


There are many ways to do infrastructure provisioning with Terraform; in this article we will focus on setting up AKS Terraform CI/CD with GitLab, covering the basics of Terraform and the GitLab CI/CD process.

Getting started

First, create a free GitLab account and add your SSH key from the profile settings. Creating a GitLab account and adding an SSH key are beyond the scope of this document, so I will add a reference for that minor task. In this article we will create a module for the AKS cluster, along with a deployment directory structure that shows how deployment works across different environments.

GitLab CI uses a `.gitlab-ci.yml` file which defines the stages of your automation. With the Terraform CI configuration you can encode some best practices.

Approach

  1. Create a separate repo structure (or workspace) for each environment; a sketch of the layout follows this list.
  2. Define a separate repo for your modules. Modules are the best way to reuse code across different environments, so keep your deployment repo separate from your base module repo.
  3. Create a backend.tf with the remote backend configuration that holds the state file, and restrict access to the repo to ensure proper security.
  4. Add a .gitlab-ci.yml file to every repo so a pipeline runs whenever changes are pushed, making sure they do not break the infrastructure.
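
For example, the two repos could be laid out as below (a sketch only; the repo and directory names are placeholders, not taken from the original setup):

terraform-aks-module/          # base module repo (reusable code)
├── main.tf
├── variables.tf
├── outputs.tf
└── .gitlab-ci.yml

aks-deployments/               # deployment repo, one directory per environment
├── dev/
│   ├── module.tf
│   ├── vars.tf
│   ├── backend.tf
│   └── provider.tf
├── prod/
│   └── ...
└── .gitlab-ci.yml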

We will create one module repo for cluster creation.

This module creates the Kubernetes cluster, and you can also create node pools for the cluster with it. As shown below, we inject the values for the module from our deployment directory.
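
The module code itself is not shown here; the following is only a minimal sketch of what it might contain, assuming the azurerm provider and the variable names used in the module call below (resource arguments such as the node pool size are illustrative):

# variables.tf (sketch): inputs injected from the deployment directory
variable "cluster_name" {}
variable "location" {}
variable "kubernetes_version" {}
variable "aks_network_profile_load_balancer_sku" { default = "standard" }
variable "aks_subnet_rg_name" {}
variable "aks_subnet_vnet_name" {}
variable "aks_subnet_address_prefix" {}

# main.tf (sketch): a subnet in an existing vnet plus the AKS cluster itself
resource "azurerm_subnet" "aks" {
  name                 = "${var.cluster_name}-subnet"
  resource_group_name  = var.aks_subnet_rg_name
  virtual_network_name = var.aks_subnet_vnet_name
  address_prefixes     = [var.aks_subnet_address_prefix]
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = var.aks_subnet_rg_name
  dns_prefix          = var.cluster_name
  kubernetes_version  = var.kubernetes_version

  # Node pool values here are placeholders; a real module would expose them as variables.
  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = azurerm_subnet.aks.id
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = var.aks_network_profile_load_balancer_sku
  }
}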

Now we will create the deployment directory, which will have the files below.

1. module.tf: the module source we want to call for our cluster creation.

module "kci" {
  source = "git::ssh://git@gitlab.com/xxx/km.git?ref=master"

  cluster_name                          = "dev-test-1"
  location                              = "eastus2"
  kubernetes_version                    = "1.17.13"
  aks_network_profile_load_balancer_sku = "standard"

  aks_subnet_rg_name        = "default-rg"
  aks_subnet_vnet_name      = "default-vnet"
  aks_subnet_address_prefix = "xx.xxx.xxx.xx/23"
}

2. vars.tf: variable definitions with values for the respective environment; a small sketch follows.
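
For instance (purely illustrative; these variables and values are not from the author's actual file):

# vars.tf (sketch): per-environment values
variable "environment" {
  default = "dev"
}

variable "location" {
  default = "eastus2"
}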

3. backend.tf: adds the backend configuration shown below. This is very important so that your state file is saved in a remote location; here we save it to an Azure storage account. The storage account details can be supplied through GitLab CI/CD variables so we do not expose our credentials directly.

terraform {
  backend "azurerm" {
    # Storage account names may contain only lowercase letters and numbers.
    storage_account_name = "devstate"
    container_name       = "dev-test"
    key                  = "test-1"
    environment          = "public"
  }
}
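
A sketch of how the credentials can stay out of the repo, assuming you define them as masked GitLab CI/CD variables (Settings > CI/CD > Variables): the azurerm backend reads the access key from the ARM_ACCESS_KEY environment variable, and any settings not committed in backend.tf can be passed at init time with -backend-config:

# Masked CI/CD variable; the azurerm backend picks it up from the environment.
export ARM_ACCESS_KEY="<storage-account-access-key>"

# Remaining backend settings supplied at init time instead of being committed.
terraform init \
  -backend-config="resource_group_name=<state-resource-group>" \
  -backend-config="access_key=$ARM_ACCESS_KEY"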

4. provider.tf: defines the (cloud) provider.

provider "azurerm" {
  # The empty features block is required by azurerm provider 2.x and later.
  features {}

  environment = "public"
}
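
In CI the provider typically authenticates with a service principal. The azurerm provider reads the values below from the environment, so they can be defined as masked GitLab CI/CD variables instead of being written into provider.tf (the placeholder values are illustrative):

export ARM_CLIENT_ID="<service-principal-app-id>"
export ARM_CLIENT_SECRET="<service-principal-password>"
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_TENANT_ID="<tenant-id>"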

5. .gitlab-ci.yml: runs the pipeline.

stages:
  - preflight
  - validate
  - plan
  - apply
  - teardown

variables:
  TF_IN_AUTOMATION: "true"
  TF_INPUT: "0"

.tf-image:
  image:
    name: hashicorp/terraform:light
    # The image's default entrypoint is `terraform`; override it so GitLab can run job scripts.
    entrypoint: [""]

.tf-init:
  extends: .tf-image
  before_script:
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    # Start an ssh-agent and load the base64-encoded deploy key so `terraform init`
    # can fetch the module repo over SSH.
    - eval $(ssh-agent -s)
    - echo "$GIT_DEPLOY_PRIVATE_KEY" | base64 -d | tr -d '\r' | ssh-add -
    - ssh-add -l
    - echo "$SSH_KNOWN_HOSTS" | base64 -d > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - terraform init

terraform-version:
  stage: preflight
  extends: .tf-image
  script: "terraform -version"

terraform-validate:
  stage: validate
  extends: .tf-init
  script:
    - "terraform validate"

terraform-plan:
  stage: plan
  extends: .tf-init
  script:
    - "terraform plan -out=plan.tfplan"
  artifacts:
    expire_in: 2d
    paths:
      - plan.tfplan
      - "*.log"

terraform-apply:
  stage: apply
  extends: .tf-init
  script: "terraform apply plan.tfplan"
  allow_failure: false
  when: manual
  only:
    - master

terraform-destroy:
  stage: teardown
  extends: .tf-init
  script: "terraform destroy -auto-approve"
  when: manual
  dependencies: []
  only:
    - master
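
The $GIT_DEPLOY_PRIVATE_KEY and $SSH_KNOWN_HOSTS values used above are assumed to be defined as GitLab CI/CD variables (Settings > CI/CD > Variables). A sketch of how they could be produced locally, with an illustrative key path:

# Base64-encode a deploy key that has read access to the module repo.
base64 -w0 ~/.ssh/gitlab_deploy_key      # paste the output into GIT_DEPLOY_PRIVATE_KEY

# Capture and encode GitLab's host keys so SSH host verification succeeds in the job.
ssh-keyscan gitlab.com | base64 -w0      # paste the output into SSH_KNOWN_HOSTS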

