Upgrades!!! — Everything new with Kubernetes 1.30

Imran Roshan
Google Cloud - Community
5 min read · Mar 28, 2024


New features, enhancements and everything exciting with Kubernetes 1.30

Excited? Aren’t we all? This version includes a slew of innovative features aimed at enhancing security, simplifying pod management, and empowering developers. Now let’s explore the main features that take Kubernetes 1.30 to the next level.

Enhanced Security Again

With the introduction of various improvements, Kubernetes 1.30 further establishes itself as a safe platform for workload deployment and management.

User namespaces for greater pod isolation [beta]

This ground-breaking feature, which graduates to beta in 1.30, gives each pod its own user namespace: the UIDs (User IDs) and GIDs (Group IDs) used inside the pod are mapped to different, unprivileged values on the host system. By drastically lowering the attack surface, this isolation makes it much harder for a compromised container to abuse privileges on the underlying host.

apiVersion: v1
kind: Pod
metadata:
  name: my-secure-pod
spec:
  hostUsers: false          # run this pod in its own user namespace
  containers:
  - name: my-app
    image: my-secure-image:latest

Setting hostUsers: false in the pod spec instructs the kubelet to run the pod in its own user namespace, so the UIDs and GIDs inside its containers map to unprivileged ranges on the host and the containers are effectively isolated from other processes on the node.
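Assuming the pod above is running on a node whose container runtime supports user namespaces, one quick way to confirm the remapping is to read the UID map from inside the container; a non-zero host offset in the second column shows the mapping is active:

kubectl exec my-secure-pod -- cat /proc/self/uid_map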

Bound service account tokens [beta]

For service account authentication, bound service account tokens (SATs) provide a more secure option than conventional, long-lived non-bound tokens. A bound token is tied to a specific pod, audience, and expiry, and stops being valid once that pod is gone, which shrinks the blast radius of a compromised token. Kubernetes 1.30 continues to harden bound tokens with further improvements in beta.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-bound-sat
spec:
  serviceAccountName: my-service-account
  containers:
  - name: my-app
    image: my-app-image:latest    # illustrative image
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: my-audience        # intended consumer of the token
          expirationSeconds: 3600      # rotated automatically before expiry

The projected serviceAccountToken volume asks the kubelet to obtain a token for the designated service account (my-service-account) that is bound to this specific pod, scoped to the given audience (my-audience here is illustrative), and rotated automatically before it expires; the application reads it from /var/run/secrets/tokens/token.
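For quick experiments you can also request a bound, time-limited token for the same service account from the command line; the audience and duration values here are illustrative:

kubectl create token my-service-account --audience=my-audience --duration=1h

The projected volume remains the right mechanism for workloads, since the kubelet keeps the mounted token rotated for you.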

Node log queries

Understanding node logs is essential for security analysis and troubleshooting. With the beta release of Node Log Query in Kubernetes 1.30, administrators can use the kubelet API to directly query system service logs on nodes. This reduces the attack surface and expedites log collection without requiring additional system access, thereby improving security.

Imagine running the following command to search logs for kubelet process-related errors:

kubectl get --raw "/api/v1/nodes/worker/proxy/logs/?query=kubelet&pattern=error"

This command retrieves logs from the kubelet service running on the “worker” node that contain the keyword “error”.
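Note that the node you query has to allow this: the NodeLogQuery feature gate must be enabled on its kubelet, along with the kubelet configuration options that expose the log endpoints. A minimal sketch of that configuration:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true
enableSystemLogQuery: true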

AppArmor profile configurations using Pod Security Contexts

Within containers, AppArmor profiles offer a potent way to enforce application security policies. Kubernetes 1.30 streamlines AppArmor configuration by letting administrators specify profiles directly in the pod-level securityContext and in container.securityContext fields, as AppArmor support graduates to stable. This simplifies policy management and replaces the older beta-era annotations.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-apparmor
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: restricted-runtime   # profile already loaded on the node
  containers:
  - name: my-app
    image: my-app-image:latest
    securityContext:
      appArmorProfile:
        type: RuntimeDefault                 # overrides the pod-level profile

Here, the pod as a whole is assigned the locally loaded “restricted-runtime” profile, while the “my-app” container overrides it with the container runtime’s default profile. This gives granular control over AppArmor policies at both the pod and the container level.
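If you want to double-check which profile was actually applied, one way (assuming the pod above is scheduled on an AppArmor-enabled node) is to read the process attribute from inside the container; the output should name the enforced profile:

kubectl exec my-pod-with-apparmor -- cat /proc/1/attr/current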

Enhanced Pod Management

Node Memory Swap

Kubernetes 1.30 refines support for swap memory on nodes. Letting the kernel use swap space for memory management may enhance system stability under memory pressure.

In Kubernetes 1.30, the node memory swap feature has been redesigned to prioritize stability while providing more control. With the introduction of LimitedSwap in place of UnlimitedSwap, Kubernetes offers a more controlled and predictable method for handling swap usage on Linux nodes. Don’t forget to assess your unique requirements prior to activating swap and to put appropriate monitoring procedures in place.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# ... other kubelet configurations
failSwapOn: false       # required so the kubelet starts on a node with swap enabled
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap
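Keep in mind that Kubernetes does not provision swap for you. Assuming you have already set up a swap device or file on the Linux node, you can confirm the kernel sees it before restarting the kubelet with this configuration:

# run on the node itself, not via kubectl
sudo swapon --show
free -h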

Container resource based pod autoscaling

This feature enables horizontal pod autoscaling (HPA) based on the CPU or memory usage of individual containers rather than the pod as a whole, making it possible to scale more precisely on what each container actually needs. By focusing on per-container metrics, you can get more out of your Kubernetes clusters’ resource allocation and scaling strategy.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: web-container   # target container within the Pod
      target:
        type: Utilization
        averageUtilization: 80

The HPA watches the CPU utilization of the web-container container in every pod of the deployment and adjusts the replica count to keep that container’s average utilization around 80%. The container field inside the containerResource metric names the container whose CPU metric is monitored.
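Operationally nothing new is required; the usual HPA tooling shows the container-level metric as it is evaluated:

kubectl describe hpa my-hpa
kubectl get hpa my-hpa --watch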

Dynamic resource allocation

Dynamic resource allocation (DRA) lets pods request specialized hardware, such as GPUs, through ResourceClaims that are satisfied by third-party drivers. Kubernetes 1.30 adds structured parameters (alpha), which describe those claims in a form the scheduler itself can interpret, so allocation decisions become faster and more predictable.

In the example below, the pod requests one GPU by referencing a ResourceClaimTemplate (gpu-claim-template; the name is illustrative) and uses the standard resource model to request 8Gi of memory for the container.

apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-app
spec:
  resourceClaims:
  - name: gpu
    source:
      resourceClaimTemplateName: gpu-claim-template
  containers:
  - name: gpu-container
    image: my-gpu-image:latest    # illustrative image
    resources:
      claims:
      - name: gpu                 # consume the GPU claim declared above
      requests:
        memory: "8Gi"             # standard memory request
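For completeness, here is a minimal sketch of the ResourceClaimTemplate the pod references. The template and resource class names are illustrative; the resource class would be published by whichever DRA driver (for example, a GPU vendor’s) is installed in the cluster:

apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    # resource class published by the installed DRA driver (name is illustrative)
    resourceClassName: gpu.example.com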

DRA with structured parameters opens the door to a more dynamic and effective resource management environment in Kubernetes 1.30. As the feature matures, expect broader adoption and the emergence of a vibrant third-party resource driver ecosystem that meets a variety of application requirements.

To Conclude

Now, obviously I am not part of the AI fleet, so I won’t write down every single feature parameter in detail. Instead, let me redirect you to the best thing to exist after Ice Cream: THE DOCUMENTATION!

Connect with me?
