Launching a Kubernetes cluster into an existing VPC in AWS using Kops + Terraform

“Kops is made on location with the men and women of devops…”

We’ve all seen the articles on how to build a Kubernetes cluster using kops. We’ve even seen the fancier ones that use the kops + Terraform combo. But what if you want to launch your cluster into an existing VPC using kops + Terraform?

Well I’m here to guide you through that process. It won’t be that painful, I promise.

#ButFirst …. Prerequisites:

  • kops (You will need to install kops before proceeding)
  • kubectl
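On macOS, both tools can be installed with Homebrew (one common route; any install method that puts them on your PATH is fine):

```shell
brew install kops kubectl
```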

Non MacOSX users:
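A sketch for Linux, pulling the binaries from the official release pages (check for the latest versions yourself):

```shell
# kops: latest release binary for linux/amd64
curl -Lo kops https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64
chmod +x kops && sudo mv kops /usr/local/bin/

# kubectl: current stable release
curl -Lo kubectl "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```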

kops is also dependent on your AWS credentials, so let’s make sure that’s set up:
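Either run the interactive configurer or export the credentials directly (the key values below are placeholders, obviously):

```shell
aws configure   # prompts for access key, secret key, and default region

# …or export them directly:
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
```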

The next step is to make sure that your VPC and subnets are tagged appropriately.

If your current VPC is already terraform’d then add the tag like so:
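Assuming the tag kops expects for shared infrastructure (`kubernetes.io/cluster/<cluster-name> = shared` — the cluster name here is a placeholder), something along these lines:

```hcl
resource "aws_vpc" "main" {
  # ...your existing VPC config...

  tags = {
    "kubernetes.io/cluster/mycluster.example.com" = "shared"
  }
}
```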

Make sure that’s done on the VPC itself as well as on all the subnets. If your current ENV is not terraform’d, then either manually add the AWS tags with
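the AWS CLI — something like this, with your own VPC/subnet IDs and cluster name substituted in:

```shell
aws ec2 create-tags \
  --resources vpc-xxxxxxxx subnet-xxxxxxxx subnet-yyyyyyyy \
  --tags Key=kubernetes.io/cluster/mycluster.example.com,Value=shared
```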

or do it in some other automated fashion.

Next we’re going to create an S3 bucket where kops can store its state.
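A sketch with a hypothetical bucket name (versioning is recommended so you can roll back state; note that regions other than us-east-1 also need a `--create-bucket-configuration LocationConstraint=<region>`):

```shell
aws s3api create-bucket --bucket my-kops-state-store --region us-east-1
aws s3api put-bucket-versioning --bucket my-kops-state-store \
  --versioning-configuration Status=Enabled
```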

Now we need to set up some ENV vars that kops will use.

KOPS_CLUSTER_NAME. This is the name of your cluster and should match the tags that you added to the VPC and all the subnets (you did remember to do that, didn’t you? If not, scroll up). It’s very important.

KOPS_STATE_STORE is the S3 bucket where kops will store its config (the YAMLs, not the Terraform files).
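For example, using the hypothetical names from above:

```shell
# Placeholder values; substitute your own cluster name and bucket.
export KOPS_CLUSTER_NAME=mycluster.example.com
export KOPS_STATE_STORE=s3://my-kops-state-store
```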

Next we’re going to use kops to generate the Terraform code that we will use to build our cluster. (The networking mode is up to you; valid options include kopeio-vxlan, flannel, weave, calico, canal, etc. Choosing one is out of the scope of this article.)

For the sake of example we’re going to assume that you have subnets in:

  • us-east-1a
  • us-east-1b
  • us-east-1c

At this point kops should’ve spit out a kubernetes.tf file in the current directory.

DO NOT RUN terraform plan / apply JUST YET!!!

The reason is that kops will attempt to create new subnets, routes, route tables, and NAT gateways, and that’s not what we want. We want to tell kops to use our existing subnets.
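To do that, open the cluster spec for editing:

```shell
kops edit cluster $KOPS_CLUSTER_NAME
```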

This should bring up a YAML file in $EDITOR. Look for the subnets section; it should look something like this:
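A sketch of what the generated subnets section tends to look like (the CIDRs here are illustrative, and only one AZ is shown):

```yaml
subnets:
- cidr: 172.20.32.0/19
  name: us-east-1a
  type: Private
  zone: us-east-1a
- cidr: 172.20.0.0/22
  name: utility-us-east-1a
  type: Utility
  zone: us-east-1a
```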

There will be one public subnet per AZ (which kops calls “utility”) and one private subnet per AZ.

We will be replacing these values with our existing subnet IDs for the AZs they are in. For the private subnets you will also need to specify the NAT gateway ID for that subnet as a key called “egress”. See below:
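Again a sketch, with placeholder subnet and NAT gateway IDs (one AZ shown; repeat for the others):

```yaml
subnets:
- id: subnet-xxxxxxxx
  egress: nat-xxxxxxxxxxxxxxxxx
  name: us-east-1a
  type: Private
  zone: us-east-1a
- id: subnet-yyyyyyyy
  name: utility-us-east-1a
  type: Utility
  zone: us-east-1a
```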

Once you save and quit, it will update the YAML config that’s stored in KOPS_STATE_STORE.

We now need to run kops to generate new TF code using the updated yaml configs that we just modified.
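That regeneration looks like:

```shell
kops update cluster $KOPS_CLUSTER_NAME --target=terraform --out=.
```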

IMPORTANT NOTE (Pay Attention!!):

(Assuming your infrastructure is already terraform’d; if it’s not, you can ignore this part.)

Because kops generates its Terraform dynamically, the resulting kubernetes.tf will include a duplicate aws provider, causing a duplicate provider configuration error if you were to run $ terraform apply.

You will need to create a file called overrides.tf and make sure it’s in the same directory as kubernetes.tf.

Contents of overrides.tf (your region may differ):
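A minimal sketch, assuming us-east-1:

```hcl
provider "aws" {
  region = "us-east-1" # your region may differ
}
```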

And then remove any ‘provider “aws”’ that’s been previously defined in your existing TF code. Leave the one that’s defined in kubernetes.tf, as you should never hand-modify that file: its output is always re-written by kops.

Now we can deploy the cluster.
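The standard Terraform dance (init is only needed the first time through):

```shell
terraform init
terraform plan
terraform apply
```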

If all went well, the apply should complete without errors.

Congratulations! You have now successfully launched a kube cluster into an existing VPC.

DevOps Janitor | Recovering SysAdmin | Kubernetes | Docker | Distributed Computing | (@while1eq1)
