Using different kubectl versions with multiple Kubernetes clusters

When you are working with multiple Kubernetes clusters, it's easy to mix up contexts and run kubectl against the wrong cluster. You don't want to contribute to kubernetes-failure-stories, I guess.

Beyond that, Kubernetes restricts the version skew between the client (kubectl) and the server (the Kubernetes master), so running commands in the right context does not mean running the right client version.

[…] a client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version.
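
You can check both versions at a glance with kubectl version (the --short flag works on the kubectl releases used in this post; the output below is illustrative):

# Print client and server versions to spot a skew at a glance
user@Notebook:~$ kubectl version --short
Client Version: v1.12.3
Server Version: v1.12.3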

Fortunately, there are some useful tools out there to help, and combined with some scripting they become even better.

Test Scenario

Let's imagine that you're responsible for managing Kubernetes for three different customers, running two different Kubernetes versions, with each customer having one or more clusters:

CUSTOMER_1
  k8s version: v1.10.11
  Clusters:
  - development
  - staging
  - production

CUSTOMER_2
  k8s version: v1.10.11
  Clusters:
  - prod

CUSTOMER_3
  k8s version: v1.12.3
  Clusters:
  - test
  - production

I'll use Bash on Linux in the demonstrations that follow, but all of these tools have nice support for other shells and operating systems.

Managing kubectl versions

You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.

We're going to use the asdf version manager for this, which is extensible via plugins. Follow the official docs to install asdf on your system, then install the kubectl plugin with the desired binary versions:

asdf plugin-add kubectl
asdf install kubectl 1.10.11
asdf install kubectl 1.12.3

Now you can easily list the available versions and switch between them:

user@Notebook:~$ asdf list kubectl
1.10.11
1.12.3
user@Notebook:~$ asdf global kubectl 1.10.11
user@Notebook:~$ kubectl version --short --client
Client Version: v1.10.11
user@Notebook:~$ asdf global kubectl 1.12.3
user@Notebook:~$ kubectl version --short --client
Client Version: v1.12.3

If you use helm, there is an asdf plugin available for it too.
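
The workflow is the same as for kubectl (the helm version below is just illustrative):

asdf plugin-add helm
asdf install helm 2.12.3
asdf global helm 2.12.3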

Handling multiple clusters

Kubernetes has a nice doc on how to accomplish this, so I won't dig into how to define clusters, users, and contexts. The tip here is:

Use a different kubeconfig file for each customer, and set the KUBECONFIG environment variable to switch between config files.

user@Notebook:~$ ll $HOME/.kube/
total 20
drwxr-xr-x 2 user user 4096 Feb 12 20:20 ./
drwxr-xr-x 4 user user 4096 Feb 12 20:19 ../
-rw------- 1 user user  230 Feb 12 20:19 config_CUSTOMER_1
-rw------- 1 user user  136 Feb 12 20:20 config_CUSTOMER_2
-rw------- 1 user user  181 Feb 12 20:20 config_CUSTOMER_3
user@Notebook:~$ export KUBECONFIG=$HOME/.kube/config_CUSTOMER_1
user@Notebook:~$ kubectl config get-contexts
CURRENT   NAME          CLUSTER       AUTHINFO   NAMESPACE
*         development   development   admin
          production    production    admin
          staging       staging       admin
user@Notebook:~$ export KUBECONFIG=$HOME/.kube/config_CUSTOMER_2
user@Notebook:~$ kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
*         prod   prod      admin
user@Notebook:~$ export KUBECONFIG=$HOME/.kube/config_CUSTOMER_3
user@Notebook:~$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
          production   production   admin
*         test         test         admin

Keep track of your current context

The Kubernetes client has a built-in kubectl config current-context command, but it is painful to run it every time before a further kubectl action.
Thankfully, we have kube-ps1 to display the current kubectl context/namespace in the prompt string for us.

To install on Bash/Linux, I prefer to clone the repo and move the script to $HOME/.local/bin:

git clone https://github.com/jonmosco/kube-ps1.git
mkdir -p $HOME/.local/bin/
mv kube-ps1/kube-ps1.sh $HOME/.local/bin/kube-ps1.sh

You'll also need to source the script in your .bashrc later on; there is an example in the "Joining it all together" section below.
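
To try it right away in the current shell, you can source the script directly; the PS1 pattern below follows the one suggested in the kube-ps1 docs:

# Load the kube_ps1 helper and add it to the prompt for this session
source $HOME/.local/bin/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '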

Let’s see how it works:

# Current KUBECONFIG env
user@Notebook:~(development:default)$ printenv KUBECONFIG
/home/user/.kube/config_CUSTOMER_1
# Getting contexts
user@Notebook:~(development:default)$ kubectl config get-contexts
CURRENT   NAME          CLUSTER       AUTHINFO   NAMESPACE
*         development   development   admin
          production    production    admin
          staging       staging       admin
# Changing context to "production"
user@Notebook:~(development:default)$ kubectl config use-context production
Switched to context "production".
# Changing default namespace to "monitoring"
user@Notebook:~(production:default)$ kubectl config set-context $(kubectl config current-context) --namespace=monitoring
Context "production" modified.
# Changing kubeconfig file
user@Notebook:~(production:monitoring)$ export KUBECONFIG=$HOME/.kube/config_CUSTOMER_3
# Listing contexts
user@Notebook:~(test:default)$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
          production   production   admin
*         test         test         admin
# Hide kube-ps1 for this session if you want
user@Notebook:~(test:default)$ kubeoff
user@Notebook:~$

The PS1 changes every time you change the kubeconfig file, the context, or the default namespace.

Joining it all together

So far, we have learned how to:

  • Use asdf to manage kubectl binary versions
  • Handle multiple clusters using separate kubeconfig files and the KUBECONFIG environment variable
  • Use kube-ps1 to show the current context/namespace in the prompt

Now we can use some aliases to switch between customer environments with a single command. Below is a .bashrc sample that fits the three environments from our Test Scenario.
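
Here is a minimal sketch of such a .bashrc, assuming the kubeconfig file names from the previous sections; the alias names (customer_1, customer_2, customer_3) are just illustrative:

# ~/.bashrc (excerpt)

# kube-ps1: show the current context/namespace in the prompt
source $HOME/.local/bin/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '

# One alias per customer: select the matching kubectl version
# and point KUBECONFIG at that customer's config file
alias customer_1='asdf global kubectl 1.10.11 && export KUBECONFIG=$HOME/.kube/config_CUSTOMER_1'
alias customer_2='asdf global kubectl 1.10.11 && export KUBECONFIG=$HOME/.kube/config_CUSTOMER_2'
alias customer_3='asdf global kubectl 1.12.3 && export KUBECONFIG=$HOME/.kube/config_CUSTOMER_3'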

With these aliases in place, a single command switches the kubectl version, the kubeconfig file, and the contexts available to you.
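
Assuming the alias names from the sketch above, a session could look like this (the prompt and output are illustrative):

user@Notebook:~$ customer_1
user@Notebook:~(production:monitoring)$ kubectl version --short --client
Client Version: v1.10.11
user@Notebook:~(production:monitoring)$ customer_3
user@Notebook:~(test:default)$ kubectl version --short --client
Client Version: v1.12.3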

That's it. How do you manage your kubectl versions and clusters? Do you have an easier way? Leave a response and let me know!