Automated Test Environment for Azure Kubernetes Service (AKS) Applications — using kind

Prabal Deb
Published in Microsoft Azure
Aug 25, 2020

Test automation is a key aspect of product quality, time to market, and availability. But executing automated tests at each stage of the modern SDLC in a cost-optimised way is equally important, and we need to keep that in mind!

If you are an SRE or part of a DevOps team, I am sure you have come across this type of conversation very often. It is a challenge to make sure that we have the required infrastructure resources for performing different types of automated tests (PR validation, merge, sanity, regression, integration, etc.).

I am not going to talk about anything new or magical that may solve the challenge. Instead, I will elaborate on an easy, re-usable, scalable, and cost-optimised framework (essentially glue that puts together a couple of existing tools) that enables auto-provisioning of test environments for testing Azure Kubernetes Service (AKS) based applications.

About

This is a framework that creates an Automated Test Environment using kind for testing Azure Kubernetes Service (AKS) based applications in a CI (Continuous Integration) pipeline (Azure DevOps). The required dependencies and infrastructure are provisioned for executing the automated tests and de-provisioned after completion.

Testing applications that are deployed on Kubernetes means either the cluster is already available or it has to be deployed on the fly before you test. In the former case there is a cost to keeping the cluster live, and in the latter it takes a lot of time and complexity to bring a Kubernetes cluster up and down.

This framework uses kind, which was primarily designed for testing Kubernetes itself. kind is often used by developers to test their applications in local dev environments and can equally well be used for automated testing.
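As a rough illustration (not part of the framework itself), spinning up and tearing down a disposable cluster with kind looks like this; the cluster name local-test is just an example:

# Minimal sketch: create a disposable kind cluster, verify it, tear it down.
# Assumes kind and kubectl are already installed; "local-test" is an arbitrary name.
kind create cluster --name local-test
kubectl cluster-info --context kind-local-test
# ... deploy charts and run tests here ...
kind delete cluster --name local-test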

Tools Used

  1. kind — a tool for running local Kubernetes clusters using Docker container “nodes”
  2. helm — the package manager for Kubernetes
  3. kubectl — command line tool to control Kubernetes clusters
  4. bash — the GNU Project’s shell

Features

This is a very simple framework that creates an Automated Test Environment to enable automated testing of applications hosted in Azure Kubernetes Service (AKS), with the following features:

  1. Create/delete a kind cluster in the CI environment
  2. Optional: Azure Key Vault Provider for Secrets Store CSI Driver installation and configuration
  3. Optional: Azure Container Registry (ACR) Image Pull Secret
  4. Install the application's helm charts (values needed for the integration test environment can be overridden easily)
  5. Validate that the respective Kubernetes pods are up and running (multiple pods can be provided, matched by the selector label app.kubernetes.io/name)
  6. Port-forward the respective Kubernetes services needed for integration testing (multiple services can be provided; local ports are assigned in the order given, starting from 8080 up to 808[number of services]). See the sketch after this list.
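Under the hood the framework drives these steps with helm and kubectl. A rough sketch of features 3 to 6 with plain commands follows; myapp, the chart path, and example.azurecr.io are placeholders, not names used by the framework:

# Illustrative only; "myapp", the chart path and the registry name are placeholders.
# (3) Image pull secret for a private Azure Container Registry
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=example.azurecr.io \
  --docker-username="$ACR_SP_CLIENT_ID" \
  --docker-password="$ACR_SP_CLIENT_SECRET"
# (4) Install the application chart, overriding values for the test environment
helm install myapp ./charts/myapp --set image.tag=ci --set replicaCount=1
# (5) Wait until the pods matched by the selector label are ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=myapp --timeout=180s
# (6) Expose the service locally so integration tests can reach it on port 8080
kubectl port-forward service/myapp 8080:80 &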

Getting Started

This framework contains the following scripts and their options (an example of chaining them together in a CI job follows the list):

  1. start.sh: Downloads all the dependencies and creates the kind cluster:
# Usage: bash -f ./start.sh
# Supported Options -
# --kind-cluster-name=<kind Cluster Name> (default INTEGRATION_TEST_CLUSTER)
# --kind-version (default v0.7.0)
# --kubectl-version (default v1.18.0)
# --helm-version (default v3.2.0)
  2. deploy.sh: Deploys/port-forwards the application helm chart and enables the optional features:
# Usage: bash -f ./deploy.sh
# Supported Options -
# --csi-driver-enabled=<yes/no> (default no, if yes provide following two parameters)
# --csi-driver-sp-client-id=<Azure Service Principal ID, having access to Azure Key Vault>
# --csi-driver-sp-client-secret=<Azure Service Principal Secret, having access to Azure Key Vault>
# --acr-imagepullsecret-enabled=<yes/no> (default no, if yes provide following three parameters)
# --acr-imagepullsecret-sp-client-id=<Azure Service Principal ID, having access to Azure Container Registry>
# --acr-imagepullsecret-sp-client-secret=<Azure Service Principal Secret, having access to Azure Container Registry>
# --acr-full-name=<Azure Container Registry full name ex. example.azurecr.io>
# --helm-chart-path=<Helm Chart Folder Path or URL to .tgz file for the application>
# --helm-chart-release-name=<Helm Release Name>
# --helm-chart-set-parameters=<","(comma) separated Helm Set parameters needed to be overwritten for integration test env>
# --kubectl-check-services=<","(comma) separated Pod names needed to be checked if up and running>
# --kubectl-check-services-selector-label=<ex. app.kubernetes.io/name or name etc.> (default app.kubernetes.io/name)
# --kubectl-port-forward-services=<","(comma) separated Service names needed to port-forward for testing>
  3. stop.sh: Deletes the kind cluster:
# Usage: bash -f ./stop.sh
# Supported Options -
# --kind-cluster-name=<kind Cluster Name> (default INTEGRATION_TEST_CLUSTER)
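Put together, a CI job might invoke the scripts roughly as follows; the chart path, release name, service name and set parameters are illustrative placeholders, while the option names come from the usage above:

# Illustrative CI sequence; parameter values are placeholders.
bash -f ./start.sh --kind-cluster-name=INTEGRATION_TEST_CLUSTER
bash -f ./deploy.sh \
  --helm-chart-path=./charts/myapp \
  --helm-chart-release-name=myapp \
  --helm-chart-set-parameters="image.tag=ci,replicaCount=1" \
  --kubectl-check-services="myapp" \
  --kubectl-port-forward-services="myapp"
# Run integration tests against the port-forwarded services (localhost:8080, 8081, ...)
bash -f ./stop.sh --kind-cluster-name=INTEGRATION_TEST_CLUSTER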

Demo

For demonstrating the framework, the following setup has been used:

Expected Outcome

The expectation is that the integration tests are executed from the CI pipeline independently, without disturbing existing development. The images below represent the outcome of the sample used to demonstrate the framework:

Resources

Some additional points need to be considered:

  • While using a Private Endpoint enabled Azure Key Vault (AKV) or Azure Container Registry (ACR) for the application, make sure to use a CI pipeline agent deployed in the same subnet where the AKV or ACR private endpoints are enabled (see the connectivity check below)
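A quick way to confirm this from the agent before the pipeline runs is a simple connectivity check; the hostnames below are placeholders for your own AKV and ACR:

# Run from the CI agent; replace the hostnames with your own AKV/ACR names.
nslookup examplevault.vault.azure.net   # should resolve to the private endpoint IP
nslookup example.azurecr.io             # should resolve to the private endpoint IP
nc -vz example.azurecr.io 443           # confirms TCP reachability to the registry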

Team

Brij Raj Singh, Ankit Sinha & Prabal Deb

Originally published at https://github.com.
