Ionut Craciunescu
Aug 26, 2018 · 5 min read
let cluster = bash (terraform, ansible, kops) => { make it happen };

There are many ways and tools that can be used to build a Kubernetes cluster.
Starting from building it from scratch, to tools like kops, kube-aws and kubicorn, or hosted clusters like GKE or EKS. That’s already a lot of options; which one should you choose? In this post I will describe what works like a treat for us at Wealth Wizards, giving us a repeatable, reliable and fast process for building a Kubernetes cluster. In less than 10 minutes we can stand up a whole cluster, have everything in place (logging, security, namespace configs etc.) and deploy services to it.

AWS is our cloud provider, and when we started with Kubernetes, kube-up was the tool to use for building clusters. Since things are always changing and we wanted a better way to build clusters, soon after joining Wealth Wizards I started exploring kube-aws as a replacement for kube-up, at the team’s suggestion. Initial testing went well, but we could not continue using it: we had to run some software on every node of the cluster that could only be installed from an .rpm or .deb package, and kube-aws uses CoreOS. So the next tool in line was kops.

Kops is a very easy tool to use and get started with. If you just want to build a cluster, once you’ve decided on a name and an S3 bucket to hold the cluster state, you only have to run:

kops create cluster \
--zones us-west-2a \
--name myfirstcluster.example.com \
--state s3://prefix-example-com-state-store

For more details on kops create see kops_create.md.

This command is great for a one-time run, but for a repeatable process, storing the whole cluster config in code is preferable. kops create -f FILENAME [flags] to the rescue!! Basically you can store the cluster config in a nicely formatted YAML file, run kops create followed by kops update, and job done! You can configure almost everything in the YAML file, but you should stick to just the options that you really need to configure and let kops manage everything else for you. One example is the Docker version installed on the nodes: unless you really need to manage this, don’t add it to your YAML file. See this page for a nice YAML example: https://github.com/kubernetes/kops/blob/master/docs/apireference/examples/cluster/cluster.yaml
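To illustrate the “configure only what you need” idea, a stripped-down cluster file can be as small as the sketch below (the name, bucket and values here are made up for illustration, not taken from our setup; everything not pinned is left for kops to default):

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: myfirstcluster.example.com
spec:
  cloudProvider: aws
  configBase: s3://prefix-example-com-state-store/myfirstcluster.example.com
  # Pin only the versions/settings you genuinely need to control.
  kubernetesVersion: 1.9.6
```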

So far I have this:

  • I’ll build my cluster using Kops and from a YAML file
  • I’ll build the cluster in an existing VPC (there are some requirements for having our own VPC. Kops can create a VPC along with a Kube cluster as well, which is quite cool, but this is not an option for us)
  • I already created automation for maintaining and managing AWS infrastructure using terraform.

At this point, the next step is to collect a bunch of IDs generated by AWS after creating my base infrastructure and automatically populate the YAML file. Enter Ansible, the simplest way to automate apps and IT infrastructure. Don’t believe me? Just google ‘ansible’ and see for yourself. In this case, Ansible is used as a templating tool: I created a role that reads a bunch of variables populated from terraform outputs, applies a Jinja2 template and creates a YAML file to be used by kops. Putting all of this together looks similar to the code snippets below.

1) Terraform:

#!/bin/bash -e
cd $dir
terraform apply -auto-approve
exit_code=$?
source $(git rev-parse --show-toplevel)/{{ scripts_path }}/terraform_output.yml.sh
exit $exit_code

The terraform output script just runs: terraform output 2>/dev/null | grep "=" | sed 's/ =/:/g' > terraform_output.yml. This creates a YAML file with the required outputs (or variables) so it can be consumed by the Ansible role, similar to:

ami_name: ami-111111
kms_key_arn: arn:aws:kms:eu-west-1:111111111111111:key/11111111-410d-4e11-ad59-1111111111
route53_domain: kube.example.com
route53_zone_id: Z3HYBV888111
security_group_kube_master_id: sg-111111
subnet_kube_master_a_cidr_block: 10.0.6.0/24
subnet_kube_master_a_id: subnet-7ae15b32
subnet_public_kube_a_id: subnet-bbec56f3
subnet_public_kube_a_name: kube01-public_kube-a
subnet_public_kube_b_cidr_block: 10.0.1.0/24
subnet_public_kube_b_id: subnet-111111
vpc_cidr: 10.0.0.0/16
vpc_id: vpc-111111
vpc_name: kube01
vpc_parent_dns_domain: example.com
vpc_region: eu-west-1
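As a self-contained illustration of that one-liner, the snippet below fakes the `terraform output` text with a heredoc (in real use the input comes from `terraform output` against live state) and runs the same grep/sed transform:

```shell
#!/bin/bash -e
# Stand-in for `terraform output`; the values are fake, for demo only.
fake_terraform_output() {
cat <<'EOF'
vpc_id = vpc-111111
vpc_region = eu-west-1
EOF
}

# Same pipeline as in the post: keep lines containing '=',
# then turn ' =' into ':' so the result parses as YAML.
fake_terraform_output | grep "=" | sed 's/ =/:/g' > terraform_output.yml
cat terraform_output.yml
# vpc_id: vpc-111111
# vpc_region: eu-west-1
```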

2) Ansible:

Role snippet:

- name: Include terraform_output
  include_vars:
    file: '{{ terraform_output_file }}'

- name: Create {{ cluster_file }}
  template:
    src: 'cluster.yml.j2'
    dest: '{{ cluster_dir }}/{{ cluster_file }}'
    mode: 0640

Jinja2 template snippet:

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: {{ cluster_name }}
spec:
  api:
    loadBalancer:
      type: {{ loadBalancer_type }}
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudLabels:
    Service: kubernetes
    VPC: {{ vpc_name }}
  cloudProvider: aws
  configBase: s3://{{ s3_kube_bucket_name }}/{{ cluster_name }}
  dnsZone: {{ cluster_name }}
..................................................................
{% for item in node_groups %}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: "{{ cluster_name }}"
  name: {{ item.name }}
spec:
  additionalSecurityGroups:
  - {{ security_group_kube_node_id }}
{% if item.node_labels is defined %}
  nodeLabels:
{% for label in item.node_labels %}
    {{ label.name }}: "{{ label.value }}"
{% endfor %}
{% endif %}
..................................................................
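For instance, with the default-node group defined in the playbook below, that template would render an InstanceGroup section roughly like this (illustrative, not captured from a real run; the security group ID is assumed to come from a terraform output):

```yaml
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: "myfirstcluster.example.com"
  name: default-node
spec:
  additionalSecurityGroups:
  - sg-111111
  nodeLabels:
    node_type: "default"
```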

Playbook:

---
- name: build k8s cluster.yaml
  hosts: localhost
  gather_facts: false
  connection: local
  roles:
    - kops_template
  vars:
    terraform_output_file: "terraform_output.yml"
    cluster_dir: "/kops"
    cluster_file: "{{ cluster_name }}.yaml"
    master_count: 1
    kubernetesVersion: 1.9.6
    loadBalancer_type: "Public"
    node_groups:
      - name: default-node
        node_labels:
          - name: node_type
            value: default
        machineType: m4.large
        maxSize: 1
        minSize: 1
        maxPrice: 0.111

Basically this playbook is the most-used file in terms of cluster configs. It has entries for the things that I need to manage most of the time. Everything else is defined in the role defaults or the Jinja2 template.

3) Kops:

kops create -f kops.yaml --name=myfirstcluster.example.com --state=s3://kube-11111111
kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub \
  --name myfirstcluster.example.com --state s3://kube-11111111
kops update cluster --name myfirstcluster.example.com --state s3://kube-11111111
kops update cluster --name myfirstcluster.example.com --state s3://kube-11111111 --yes

And that’s it! Press enter after typing kops update cluster --yes and the cluster will be created.

Now I have: terraform managing the base infrastructure, Ansible creating a YAML file by applying a template, and kops building the cluster. All these align nicely, just like Lego bricks.
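Chaining the three blocks in one wrapper script could look like the sketch below. The `run` helper, the playbook filename and the kops/ directory are my assumptions for illustration, not from our actual scripts; with the default DRY_RUN=1 it only prints the commands it would execute:

```shell
#!/bin/bash -e
# Sketch of a wrapper chaining terraform -> ansible -> kops.
# File names and paths are assumptions, not the real project layout.
CLUSTER_NAME="${CLUSTER_NAME:-myfirstcluster.example.com}"
STATE="${STATE:-s3://kube-11111111}"
DRY_RUN="${DRY_RUN:-1}"

# Print the command instead of running it when DRY_RUN=1.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run terraform apply -auto-approve                # 1) base infrastructure
run ansible-playbook build_cluster.yml           # 2) render the kops YAML
run kops create -f "kops/${CLUSTER_NAME}.yaml" \
  --name "$CLUSTER_NAME" --state "$STATE"        # 3) register the cluster
run kops update cluster --name "$CLUSTER_NAME" --state "$STATE" --yes
```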

I used these building blocks for our Kube clusters with the goal of having a consistent, repeatable and reliable process. As an added bonus, it gives us excellent DR for the compute part of our infrastructure; it’s all maintained in code and makes future upgrades or cluster patches a breeze.

Wealth Wizards Engineering

The place where we blog about the cool stuff that's going on in Wealth Wizards Engineering and where we showcase our public APIs.
