Upgrades!!! — Everything New With Kubernetes 1.31

Imran Roshan
Google Cloud - Community
6 min read · Aug 8, 2024

Enhanced and ready to go with the new updates on K8s

Kubernetes 1.31 introduces numerous features aimed at improving security, resource management, and traffic distribution. Users can expect a more reliable and efficient Kubernetes experience with this update.

Notable Features

Enhanced Security

AppArmor Support: Protect your apps from potential vulnerabilities by enforcing security profiles for pods and containers.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app-container
    image: my-app-image
    securityContext:
      appArmorProfile:
        type: RuntimeDefault  # or type: Localhost with localhostProfile: <custom-profile-name>

PodDisruptionBudget (PDB) with Unhealthy Pod Eviction Policy: The unhealthyPodEvictionPolicy field lets you decide whether pods that are running but not yet healthy may be evicted even when the budget would otherwise block it, while healthy pods stay protected. It offers more precise control over how pods are evicted.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
  unhealthyPodEvictionPolicy: AlwaysAllow  # or IfHealthyBudget; stable in 1.31

Resource Management

Pod-Level Resource Limits: Gain more precise control over resource allocation by defining a ceiling on resource usage for the pod as a whole, on top of per-container limits.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  resources:            # pod-level ceiling (feature-gated); applies to the pod as a whole
    limits:
      cpu: "4"
      memory: "4Gi"
  containers:
  - name: my-container
    image: my-image
    resources:
      limits:
        cpu: "2"
        memory: "2Gi"

In this example, my-container has limits defined at the container level, while the pod carries an additional, higher ceiling. The pod's total resource consumption cannot exceed the pod-level limits, regardless of how the individual containers are sized.

Traffic Management

Multiple Service CIDRs: The range of IP addresses available for Services can now be extended after cluster creation by adding extra CIDR blocks through the new ServiceCIDR API (beta in 1.31, behind the MultiCIDRServiceAllocator feature gate), instead of being fixed to the range chosen at install time. A minimal example, assuming the beta API group networking.k8s.io/v1beta1:

apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.20.0.0/16  # additional range for ClusterIP allocation; create more ServiceCIDR objects for further ranges

For expressing routing preferences on an individual Service, 1.31 also builds on the trafficDistribution field described in the next section.

Service trafficDistribution field

The spec.trafficDistribution field on a Kubernetes Service was introduced as alpha in v1.30 and is promoted to beta in v1.31. It lets you express a preference for how traffic should be routed to Service endpoints. Whereas traffic policies give strict semantic guarantees, traffic distribution only expresses a preference, such as routing to topologically closer endpoints, which can help optimize cost, reliability, or performance. If the ServiceTrafficDistribution feature gate is not already enabled in your cluster, it must be turned on for the control plane and all nodes before the field can be used. The following field value is supported:

PreferClose: Expresses a preference for routing traffic to endpoints that are topologically close to the client. Implementations may interpret "topologically close" differently; it may mean endpoints on the same node, or in the same rack, zone, or region.
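
As a minimal illustration (the names here are hypothetical), a Service opts into this preference with a single extra field:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose  # prefer topologically closer endpoints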

Scheduling hints for the VolumeRestrictions plugin

The VolumeRestrictions plugin enforces limits on how volumes may be shared between pods; for example, a PersistentVolumeClaim with the ReadWriteOncePod access mode can only be used by one pod at a time. In Kubernetes 1.31 the kube-scheduler gains scheduling (queueing) hints for this plugin: rather than blindly retrying every pending pod, the scheduler is told which cluster events, such as the deletion of a pod that currently holds a conflicting volume, can make a blocked pod schedulable again, and it requeues the pod only then.

In the example below, the pod mounts a persistent volume claim. If another pod already holds that claim exclusively, the scheduler keeps this pod pending and, thanks to the queueing hints, retries it as soon as the conflict goes away. Nothing new has to be set on the pod itself; the hints are internal to the scheduler.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim  # a claim with the ReadWriteOncePod access mode

Randomized Pod Selection Algorithm

In earlier versions of Kubernetes, ReplicaSets used a deterministic method to choose which pods to remove during downscaling. This frequently led to skewed placement, with some nodes ending up with disproportionately fewer pods than others, which over time can hurt resource usage and cluster performance.

Kubernetes 1.31 adds a randomized step when choosing among equally eligible candidate pods. This avoids the systematic skew and improves overall cluster utilization.

Although the randomized pod selection is implemented inside the Kubernetes controller manager, the idea is easier to see with a simpler illustration. Here is a small Python simulation of randomized pod selection:

import random


def random_pod_selection(pods):
    """Selects a random pod for termination from a list of pods.

    Args:
        pods: A list of pod objects.

    Returns:
        The selected pod.
    """
    if not pods:
        return None
    return random.choice(pods)
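
For example, calling it on a hypothetical list of pod names picks one uniformly at random:

pods = ["my-app-abc12", "my-app-def34", "my-app-ghi56"]  # hypothetical pod names
print(random_pod_selection(pods))  # e.g. "my-app-def34"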

Persistent Volume Reclaim Policy

A PV's reclaim policy determines what happens to it when its bound PVC is deleted. Kubernetes 1.31 improves how this policy is handled.
The following reclaim policies are available:

  • Retain: The PV is kept after the PVC is deleted.
  • Recycle: The PV is scrubbed and made available for reuse (deprecated in favor of dynamic provisioning).
  • Delete: The PV is deleted after the PVC is deleted.

The enhancements in Kubernetes 1.31 include:

  • Finalizers: Finalizers can be added to PVs so they are not removed until the underlying storage has actually been handled according to the reclaim policy.
  • Reclaim policy validation: Stricter validation of reclaim policies to prevent mistakes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /export/path
    server: 192.168.1.10
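
If you ever need to change the policy on an existing volume, a standard kubectl patch (nothing 1.31-specific) does the job:

kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
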
  • Persistent Volume Last Phase Transition Time: To help with debugging and lifecycle management, track when a Persistent Volume last transitioned into its current phase (such as Released); see the snippet after this list.
  • Jobs with Retriable and Non-Retriable Pod Failures: Distinguish recoverable pod failures from non-retriable ones within Jobs to enable smarter job management.
  • Elastic Indexed Jobs: Indexed Jobs can now be scaled up or down, allowing dynamic workload adjustments.
  • Enhanced Ingress Connectivity with Kube-Proxy: Improve the reliability of Ingress connections through better kube-proxy connection handling.
  • Consider Terminating Pods in Deployments: Deployments gain a way to indicate whether pods that are terminating due to scaling should be taken into account.
  • Declarative Node Maintenance: Manage planned node maintenance declaratively through Node objects, simplifying the procedure.
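
The last-phase-transition timestamp mentioned in the first bullet is recorded in the PV status and can be read directly:

kubectl get pv my-pv -o jsonpath='{.status.lastPhaseTransitionTime}'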

Deprecations

KubeProxy Requirements

From Kubernetes 1.31 onward, kube-proxy's nftables mode requires kernel 5.13 or later and nft (the nftables command-line utility) version 1.0.1 or later. In this mode, kube-proxy programs Netfilter through the nft tool, and it depends on kernel features introduced in 5.13. If you are running older versions, upgrade the kernel and the nft tooling before moving to Kubernetes 1.31.

Here, let me help you ;)

# Upgrade the nft command-line tool (the Debian/Ubuntu package is nftables)
apt-get update && apt-get install -y nftables
nft --version

# Check the current kernel version
uname -r

# Build and install a newer kernel from source (one option; most distributions
# also ship prebuilt 5.13+ kernels through their package manager)
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.13.19.tar.xz
tar -xvf linux-5.13.19.tar.xz
cd linux-5.13.19
make olddefconfig
make -j"$(nproc)"
make modules_install
make install

# Update GRUB so the new kernel is used at the next boot
update-grub

Version field — In Kubernetes v1.31, the .status.nodeInfo.kubeProxyVersion field of Nodes is deprecated and will be removed in a later release. The field is deprecated because its value was, and still is, unreliable: it is set by the kubelet, which has no trustworthy information about the kube-proxy version or even whether kube-proxy is running at all. In v1.31 the kubelet no longer attempts to set the .status.nodeInfo.kubeProxyVersion field for its associated Node, and the DisableNodeKubeProxyVersion feature gate is enabled by default.
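
To check whether any of your nodes (or tooling that reads them) still surface this field, you can list what is currently reported:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeProxyVersion}{"\n"}{end}'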

In-Tree Cloud Provider Code Removal

The removal of the remaining in-tree cloud provider code is one of Kubernetes 1.31's biggest changes. The goal is to make core Kubernetes vendor-neutral and to push provider integrations out to externally maintained cloud controller managers. In practice, core components stop selecting an in-tree provider and the provider's own cloud-controller-manager is deployed alongside them; a rough sketch of what that looks like:

# Core components no longer select an in-tree provider
kubelet --cloud-provider=external ...
kube-controller-manager --cloud-provider=external ...
# Then install your provider's external cloud-controller-manager
# (for example cloud-provider-aws), following that project's documentation.

Make sure to

  • Identify the deprecated APIs you are currently using (a quick check is shown after this list).
  • Find the recommended replacement API in the Kubernetes documentation.
  • Update your code to use the replacement API.
  • Test your changes thoroughly.
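
For the first step, the API server exposes a metric that flags clients still calling deprecated APIs; scanning it is a quick way to find offenders:

kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis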

For more, visit the Kubernetes Deprecation Guide

To Conclude

You should treat this post only as a jumping-off point for exploring Kubernetes 1.31. To learn about every change in this release, I highly recommend reading the entire changelog. Go ahead and visit the changelog for even more.

Connect with me
