Launching a Kubernetes cluster into an existing VPC in AWS using Kops + Terraform

“Kops is made on location with the men and women of devops…”

We’ve all seen the articles on how to build a Kubernetes cluster using kops. We’ve even seen the fancier ones that use the kops + Terraform combo. But what if you want to launch your cluster into an existing VPC using kops + Terraform?

Well, I’m here to guide you through that process. It won’t be that painful, I promise.

#ButFirst …. Prerequisites:

  • kops (You will need to install kops before proceeding)
  • kubectl
$ brew install kops
$ brew install kubectl

Non-macOS users: grab the kops and kubectl binaries from their respective release pages, or use your platform’s package manager.

kops is also dependent on your AWS credentials, so let’s make sure those are set up:

$ export AWS_ACCESS_KEY_ID="<your_access_key>"
$ export AWS_SECRET_ACCESS_KEY="<your_secret_key>"
$ export AWS_DEFAULT_REGION="<aws_region>"
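A quick sanity check that the credentials are being picked up (assuming you also have the AWS CLI installed):

```shell
# Should print the account ID and ARN of the credentials kops will use
aws sts get-caller-identity
```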

The next step is to make sure that your VPC and subnets are tagged appropriately.

If your current VPC is already terraform’d, then add the tag like so:

tags = {
  "KubernetesCluster" = "k8s.mydomain.com"
}

Make sure that’s done on the VPC itself as well as on all the subnets. If your current environment is not terraform’d, then either manually add the AWS tags with

Key: “KubernetesCluster” and Value: “k8s.mydomain.com”

or do it in some other automated fashion.
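One automated option is the AWS CLI; a sketch, where the VPC and subnet IDs below are placeholders for your real ones:

```shell
# Apply the KubernetesCluster tag to the VPC and every subnet kops will use
aws ec2 create-tags \
  --resources vpc-0abc123 subnet-0abc123 subnet-0def456 subnet-0ghi789 \
  --tags Key=KubernetesCluster,Value=k8s.mydomain.com
```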

Next we’re going to create an S3 bucket where kops can store its state.

resource "aws_s3_bucket" "state_store" {
  bucket        = "k8s-mydomain-state"
  acl           = "private"
  force_destroy = true

  versioning {
    enabled = true
  }
}

Now we need to set up some environment variables that kops will use.

export KOPS_CLUSTER_NAME="k8s.mydomain.com"
export KOPS_STATE_STORE="s3://k8s-mydomain-state"

KOPS_CLUSTER_NAME is the name of your cluster and should match the tag value that you added to the VPC and all the subnets (you did remember to do that, didn’t you? If not, scroll up). It’s very important.

KOPS_STATE_STORE is the S3 bucket where kops will store its config (the YAML manifests, not the Terraform files).

Next we’re going to use kops to generate the Terraform code that we will use to build our cluster. (The networking mode is up to you; valid options include kopeio-vxlan, flannel, weave, calico, canal, etc. Choosing one is out of scope for this article.)

For the sake of example, we’re going to assume that you have subnets in:

  • us-east-1a
  • us-east-1b
  • us-east-1c
kops create cluster \
  --cloud=aws \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --topology=private \
  --dns-zone=mydomain.com \
  --networking=weave \
  --vpc=<your_vpc_id> \
  --target=terraform \
  --out=.

At this point kops should have spit out a kubernetes.tf file in the current directory.

DO NOT RUN terraform plan / apply JUST YET!!!

The reason is that kops will attempt to create new subnets, routes, route tables, and NAT gateways, and that’s not what we want. We want to tell kops to use the existing subnets.

$ kops edit cluster

This should bring up a YAML file in $EDITOR. Look for the subnets section; it should look something like the snippet below.

There will be one public subnet per AZ (which kops calls “utility”) and one private subnet per AZ.
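A sketch of what that subnets section typically looks like for one AZ (the CIDRs are illustrative, not yours):

```yaml
subnets:
- cidr: 10.10.32.0/19
  name: us-east-1a
  type: Private
  zone: us-east-1a
- cidr: 10.10.0.0/22
  name: utility-us-east-1a
  type: Utility
  zone: us-east-1a
# ...repeated for us-east-1b and us-east-1c
```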

We will be replacing these values with our existing subnet IDs for the AZ each one is in. For the private subnets you will also need to specify the NAT gateway ID for that subnet as a key called “egress”. See below:
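After editing, each entry references an existing subnet by id instead of a cidr; a sketch for one AZ, where the subnet and NAT gateway IDs are placeholders for your real ones:

```yaml
subnets:
- id: subnet-0abc123
  egress: nat-0abc123
  name: us-east-1a
  type: Private
  zone: us-east-1a
- id: subnet-0def456
  name: utility-us-east-1a
  type: Utility
  zone: us-east-1a
# ...repeated for us-east-1b and us-east-1c
```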

Once you save and quit, kops will update the YAML config that’s stored in KOPS_STATE_STORE.

We now need to run kops again to generate new TF code using the updated YAML config that we just modified.

kops update cluster --out=. --target=terraform

IMPORTANT NOTE (Pay Attention!!):

(Assuming your infrastructure is already terraform’d; if it’s not, then you can ignore this part.)

Because kops generates its Terraform code dynamically, the resulting kubernetes.tf will include a duplicate aws provider, causing the following error if you were to run $ terraform apply:

Error: provider.aws: multiple configurations present; only one configuration is allowed per provider

You will need to create a file called override.tf and make sure it’s in the same directory as kubernetes.tf. (Note: Terraform only treats files named override.tf, or ending in _override.tf, as override files, which is what lets this provider block merge with the one in kubernetes.tf instead of duplicating it.)

Contents of override.tf (your region may differ):

provider "aws" {
  region = "us-east-1"
}

Then remove any provider “aws” block that’s been previously defined in your existing TF code. Leave the one that’s defined in kubernetes.tf; you should never hand-modify that file, as its output is always re-written by kops.

Now we can deploy the cluster:

$ terraform plan
$ terraform apply

If all went well you should see:

$ kubectl get nodes
NAME                            STATUS    ROLES     AGE       VERSION
ip-10-10-112-176.ec2.internal   Ready     master    10m       v1.10.3
ip-10-10-117-46.ec2.internal    Ready     node      8m        v1.10.3
ip-10-10-41-79.ec2.internal     Ready     node      7m        v1.10.3
ip-10-10-50-107.ec2.internal    Ready     master    10m       v1.10.3
ip-10-10-80-185.ec2.internal    Ready     master    9m        v1.10.3

Congratulations! You have now successfully launched a Kubernetes cluster into an existing VPC.