AWS EKS Kubernetes Versions Upgrade and Update Management

Nick Gibbon
Published in Pareture
Aug 23, 2021

To successfully manage an AWS EKS cluster over time, so that the platform can consistently deliver value, you need to ensure smooth operation and enable ways to make improvements.

One element of this is consistently managing upgrades, updates and patches. But to actually do this you need an inventory of what to manage, a view of when to make updates, and processes that enable every type of version change. This goes for all software projects. There are many different ways to solve this, but the first step is to truly understand what you need to control for.

AWS EKS is the AWS managed Kubernetes offering. It is a complex service, so version management is very different from tracking the single version you might have with a simpler application.

The information and links are accurate as of the date posted.

AWS EKS Managed Service

This is the managed control plane part of EKS. Its version, e.g. 1.21, aligns with the upstream Kubernetes version. As of writing, Kubernetes releases a new minor version every quarter or so. EKS commits to picking up new versions relatively soon after release and supports at least four Kubernetes versions at any one time. Upstream Kubernetes supports each minor version for approximately one year, while EKS commits to supporting it for at least 14 months.

EKS Patch Versions

E.g. eks.1. Patch (platform) versions of the managed service are applied automatically by AWS and are always non-breaking.
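As a quick illustration, both of these versions can be read from the DescribeCluster API. A minimal sketch with boto3 (the cluster name and region are placeholders for your own environment):

```python
# Minimal sketch: read the Kubernetes version and EKS platform (patch)
# version of a cluster. Assumes boto3 is installed and AWS credentials
# are configured; "my-cluster" and the region are placeholders.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")
cluster = eks.describe_cluster(name="my-cluster")["cluster"]

# Kubernetes minor version of the managed control plane, e.g. "1.21"
print("Kubernetes version:", cluster["version"])

# EKS platform/patch version, e.g. "eks.1", managed automatically by AWS
print("Platform version:", cluster["platformVersion"])
```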

AWS EKS Addons

AWS EKS Addons are day-1 workloads that are deployed automatically as part of the managed service, into the kube-system namespace by default, to ensure the EKS cluster functions.

As of now these are:

  • Amazon VPC CNI
  • CoreDNS
  • kube-proxy

Each one is published to specified AWS-owned ECR repositories and needs its version managed to correspond with the EKS platform version you are running.
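A minimal sketch with boto3 of how you might list the add-on versions AWS publishes as compatible with a given Kubernetes version (the region and the "1.21" version are placeholders for your own setup):

```python
# Minimal sketch: list published EKS add-on versions for a cluster version.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

for addon in ("vpc-cni", "coredns", "kube-proxy"):
    resp = eks.describe_addon_versions(kubernetesVersion="1.21", addonName=addon)
    versions = [
        v["addonVersion"]
        for a in resp["addons"]
        for v in a["addonVersions"]
    ]
    print(f"{addon}: {versions}")
```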

AWS EKS Nodes

For compute you will need to use node AMIs that align with the EKS platform version. As the managed service receives patches, the public EKS AMIs also release corresponding patch versions, which you should seek to stay aligned with.

EKS optimized Amazon Linux AMIs

Default option.
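AWS publishes the recommended AMI ID for each Kubernetes version as a public SSM parameter, which is a convenient way to keep node groups aligned. A minimal sketch with boto3, assuming the documented parameter path for the Amazon Linux 2 EKS optimized AMI (the version and region are placeholders):

```python
# Minimal sketch: look up the recommended EKS optimized Amazon Linux 2 AMI
# for a given Kubernetes version via the public SSM parameter.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")

param = ssm.get_parameter(
    Name="/aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id"
)
print("Recommended AMI:", param["Parameter"]["Value"])
```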

Amazon EKS optimized Bottlerocket AMIs

A container-specific OS which I think will be the future and will become more and more ingrained in the EKS ecosystem.

Self-built AMIs

You can build your own AMIs using information and code provided by AWS. If you are building your own AMIs to use with EKS then you will want to ensure you have a process to patch them in the same way.

Fargate

You can also run workloads on AWS Fargate, where you would not need to worry about this type of versioning. Fargate comes with its own trade-offs.

Day 2 Workloads

After you have a working EKS cluster and consistent processes for upgrading and updating all of the above, the real fun begins. There will be other workloads that you require to provide the level of quality and service that you want. Common examples include (but are certainly not limited to):

  • Ingress controllers
  • Admission controllers
  • CNI plugins
  • CSI Drivers
  • External DNS
  • Cert Manager
  • Metrics Server
  • Other monitoring workloads (Prometheus, Grafana etc.)

Here you need to watch out for version compatibility with the Kubernetes platform and compatibility between the workloads themselves! Primarily via the following (a small inventory sketch follows the list):

  1. Helm Chart versions (or other versions of configuration management).
  2. Container versions.
  3. Kubernetes object apiVersions.
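To keep track of what is actually running, a quick inventory helps. Below is a minimal sketch using the official Kubernetes Python client, assuming a kubeconfig already points at the cluster; it simply lists the container images (and therefore versions) used by every Deployment.

```python
# Minimal sketch: inventory the container image versions currently deployed.
# Assumes the "kubernetes" Python client is installed and a kubeconfig
# pointing at the cluster is available locally.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

apps = client.AppsV1Api()
for dep in apps.list_deployment_for_all_namespaces().items:
    images = [c.image for c in dep.spec.template.spec.containers]
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: {images}")
```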

Integrations

It is also likely that you will deploy workloads that enable integrations with services outside of the EKS cluster. Here it’s important that the version of this software corresponds with the version of the external software. For example, if you have an agent deployment that sends data to an external system, you need to ensure the protocols and data formats line up between source and destination.

User workloads

Once all of the above is managed, you finally get to the real business value: your differentiated custom software!

When developing applications you will be deploying containers and Kubernetes objects, and you will need to ensure these workloads play nicely with the platform versions and that their internal libraries and packages are also maintained.

Other

Tools

For efficiency and reproducibility you will be using various tools to enable automation and configuration, Terraform and Helm for example. This software also needs to be accounted for, so that it keeps working with the platforms it targets and so that you can take advantage of new features.
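As a small illustration, a version inventory of the tooling itself can be scripted. This is only a sketch and assumes terraform, helm and kubectl are installed and on the PATH; the commands used are the standard version subcommands of those CLIs.

```python
# Minimal sketch: capture the versions of the CLI tools used to manage the
# cluster, so tool upgrades are deliberate rather than accidental.
# Assumes terraform, helm and kubectl are installed and on PATH.
import subprocess

commands = {
    "terraform": ["terraform", "version"],
    "helm": ["helm", "version", "--short"],
    "kubectl": ["kubectl", "version", "--client"],
}

for tool, cmd in commands.items():
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(tool, "->", out.stdout.strip().splitlines()[0])
```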

AWS

AWS at large doesn’t expose publicly visible versions for most of its core services unless it is explicit about it, as it is with EKS. AWS is generally very stable, and any breaking changes will be announced in advance via various communication channels.

Versions all the way down

Of course, the versions mentioned here don’t cover all versions of all software involved in managing EKS clusters. Every AMI, container and application is made of many different pieces of versioned software that need to be maintained at different levels. The versioning covered in this post is the primary conceptual level you will work with when operating these systems at this level of the stack: the publicly declared versions and interfaces. Every component has its own versioning scheme and release cadence.

  1. Managing this is not easy. Doing this well requires constant effort.
  2. Automate your testing at all levels of the stack. This is the best way to check for regressions with all of the different versions at play.


Nick Gibbon
Pareture

Software reliability engineer & manager in cloud infrastructure, platforms & tools.