KubeStack — A Must Use

An Open-Source, Easy-to-use GitOps Framework

Shanika Perera
Geek Culture
5 min read · Jun 10, 2021


Automation is the new trend. It has helped and influenced many industries, but in my opinion, the IT industry has gained the most from it. Within this broad topic, GitOps has attracted a lot of attention, from full-stack engineers to SREs and DevOps professionals. GitOps is a method of applying continuous deployment to cloud-native applications: it deploys applications using developer-friendly tools such as Git and continuous delivery pipelines.

There is nothing better than having the configurations stored in a Git repository deployed automatically whenever something changes. If you are a developer working with Kubernetes, you have probably come across the difficulty of deploying your application from scratch repeatedly. Helm charts eased some of those pain points but did not deliver picture-perfect automation out of the box. Luckily for you, there is a framework that will help you build infrastructure automation for Kubernetes (K8s): KubeStack, an open-source Terraform framework for teams that want to automate infrastructure.

Image Source — https://github.com/kbst (Authorized by Owner)

KubeStack builds on the Terraform ecosystem. It provides Terraform modules for cluster infrastructure and cluster services, as well as a Terraform provider for Kustomize. KubeStack integrates with the major cloud providers, Amazon Web Services, Google Cloud Platform, and Azure, and uses each cloud's Terraform provider to manage its managed K8s offering: EKS on Amazon, AKS on Azure, and GKE on Google. Cluster infrastructure and cluster services are defined as Terraform modules in this framework.
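To give a feel for what this looks like, here is a rough sketch of a KubeStack cluster module call. The module path, version ref, and attribute names are illustrative assumptions on my part; check the official KubeStack documentation for the exact interface.

```hcl
# Hypothetical sketch: defining an EKS cluster through a KubeStack
# Terraform module. Source ref and attribute names are assumptions
# for illustration only.
module "eks_cluster" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.18.1"

  # Per-workspace configuration: "apps" holds the production-like
  # settings, while "ops" inherits from it and can override values.
  configuration = {
    apps = {
      name_prefix                = "kbst"
      base_domain                = "infra.example.com"  # placeholder domain
      cluster_availability_zones = "eu-west-1a,eu-west-1b"
    }
    ops = {}
  }
}
```

One module block like this covers both environments, which is what makes the apps/ops workspace split later in the tutorial work without duplicating configuration.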

Another great aspect of KubeStack is that it is open source. Several features set it apart from the competition: a stable GitOps workflow for the team, tested and reusable Terraform modules, and faster application development that makes automation smoother and more efficient. It is also simple to integrate KubeStack with multiple managed K8s offerings such as EKS, AKS, and GKE.

Image by Author

KubeStack was extremely easy for me to follow and install thanks to its documentation and tutorials. The documentation is well written and easy to follow, even for a beginner. I tried the KubeStack framework on AWS, and the commands in the step-by-step tutorials were easy to follow.

KubeStack installation consists of three main steps.

  1. Develop locally
  2. Provision Infrastructure
  3. Set up automation

I will go through these steps and explain my experience with KubeStack.

Develop locally

This is one of the best elements of KubeStack in my opinion. The developer can simulate the configuration in a local environment. How great is that? If you are working with multiple environments like Dev, QA, and Prod, I am certain this feature will come in handy. You can simulate the configuration changes for each cloud environment on your local machine first, so you do not have to worry about errors that might occur when deploying your changes to the cloud.

By leveraging localhost development, you reduce the cost of interacting with the cloud provider just to troubleshoot and fix misconfigurations. The local environment is created using kind (Kubernetes in Docker), a tool that runs K8s nodes as Docker containers on your local machine.
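As a sketch of the underlying idea, a kind cluster can also be driven directly from Terraform via the community "tehcyx/kind" provider. The provider source and attribute names here are assumptions for illustration; KubeStack wires up the local environment for you, so you would not normally write this by hand.

```hcl
# Hypothetical sketch: a local kind cluster managed by Terraform,
# using the community tehcyx/kind provider (an assumption here,
# not part of KubeStack itself).
terraform {
  required_providers {
    kind = {
      source = "tehcyx/kind"
    }
  }
}

resource "kind_cluster" "local" {
  # Each "node" of this cluster runs as a Docker container locally,
  # which is what makes the simulation cheap and fast to tear down.
  name           = "kbst-local"
  wait_for_ready = true
}
```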

As you can see, KubeStack is also fast and efficient. It only took 112.5 seconds to build my configurations locally at the initial stage.

Provision Infrastructure

This is the second part of the tutorial. If you are satisfied with the local changes, in this step you move them to your preferred cloud provider's infrastructure. KubeStack includes a bootstrap container that bundles tooling such as the cloud providers' CLIs, so you do not need to install them yourself.

In this phase, you create the GitOps repository. To do this, you need to set up authentication and the remote state for your Terraform configuration. Two Terraform workspaces, apps and ops, are created in this phase, and the remote state configuration is applied to both. I followed along with the tutorials, and it was very easy to set this up. These workspaces create AWS resources such as:

  • VPC and subnet configurations
  • Route tables and route table associations
  • Internet Gateway
  • Route53 hosted zone and its records
  • Security groups
  • IAM roles, policies, and instance profiles
  • EKS cluster with its cluster services and node groups
  • Kustomization resources
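The remote state mentioned above can be sketched as a standard Terraform backend block. The bucket, key, and table names below are placeholders of my own; the KubeStack tutorial generates an equivalent configuration for you.

```hcl
# Minimal sketch of a remote state configuration shared by the
# apps and ops workspaces. All names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-kubestack-state"  # placeholder bucket
    key            = "tfstate/infrastructure"
    region         = "eu-west-1"
    dynamodb_table = "example-terraform-locks"  # enables state locking
  }
}
```

Keeping the state remote is what lets both workspaces, and later the CI pipeline, operate on the same infrastructure safely.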

After applying the configuration changes, commit them to Git so the GitOps process can pick them up.

Set up automation

This is by far my favorite step of the KubeStack workflow. In this phase, the automation process is created by adding the pipeline and implementing the GitOps workflow.

I created a new repository called “infrastructure-automation” and pushed the local repository to it. After setting the credentials for the pipeline, creating the pipeline for the GitOps flow was as easy as clicking a button. The KubeStack tutorials contain the pipeline file, and all I had to do was follow the commands in the tutorial.

After pushing the changes to the remote repo, the pipeline runs can be seen when a pull request is made.

And just like that, I was able to deploy K8s clusters on AWS in a short amount of time.

Overall, my experience with KubeStack was very satisfying. Although the initial setup took more than an hour, subsequent deployments completed within two minutes through the GitOps workflow. If you link your repository to trigger automated pipeline runs, you will save a lot of time on your deployments. KubeStack is an excellent automation framework for K8s projects. I recommend that my SRE and DevOps engineering colleagues try this framework and see how it eases their day-to-day ops work.



Masters in Cyber Security (Reading) @ Deakin University | Ex-WSO2 | CKA | AWS SysOps Administrator | HashiCorp Certified Terraform Associate