Kubernetes v1.30 — Quick Guide

Ahmad Mohey
Apr 19, 2024 · 5 min read

Kubernetes v1.30 is now available, delivering a compelling set of 45 enhancements, categorized as:

  • Stable (17 features): Ready for production use, these features enhance stability, security, and pod management.
  • Beta (18 features): Currently under testing and enhancement, these features offer promising functionalities like improved logging and traffic distribution.
  • Alpha (10 features): Early-stage features with great potential, like faster security labeling and job completion controls.

The release name, “Uwubernetes,” combines “Kubernetes” with “UwU,” an emoticon symbolizing happiness and cuteness.

Stable Improvements:

  • Automatic Volume Recovery After Restart: Kubernetes can now automatically recover volume information after a kubelet restart or machine reboot. This eliminates the need for manual intervention and ensures volumes are properly cleaned up, improving overall reliability.
  • Enhanced Volume Restore Security: Unauthorized changes to volume modes during snapshot restoration are now prevented. This strengthens data security by requiring explicit permissions from cluster administrators for any volume mode modifications during restore operations.
  • Pod Scheduling Readiness: Pod scheduling readiness addresses the challenge of efficient resource utilization by holding Pods back from scheduling until the necessary resources are available. In beta since Kubernetes v1.27, this feature provides customizable controls, allowing users to implement quota mechanisms and security policies. By delaying scheduling until resources are ready, unnecessary work for the scheduler is avoided, potentially leading to cost savings, especially in conjunction with cluster autoscaling. The integration of scheduling gates into the Kubernetes Pod API gives users greater flexibility in managing workload scheduling (see the first sketch after this list).
  • Min Domains in PodTopologySpread: Users can now define the minimum number of domains required for spreading Pods. This enhancement ensures optimal workload distribution and resilience, particularly beneficial when used alongside the Cluster Autoscaler. Previously, insufficient domains would leave Pods unschedulable; with this feature, Kubernetes dynamically provisions nodes in new domains as needed, improving resource utilization and workload placement efficiency (see the second sketch after this list).
  • Go Workspaces: The project has transitioned to using Go workspaces in its repository. This means that the way developers organize and manage their code has changed. While this transition primarily affects developers working on projects that use Kubernetes code, end users won’t notice any difference. The goal of this change is to make development processes more streamlined and maintainable. However, developers should be aware that they may need to adjust their code-generation tools due to this transition.
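
As a concrete illustration of scheduling gates, here is a minimal sketch of a gated Pod; the gate name example.com/quota-check is purely hypothetical, and the Pod stays in the SchedulingGated state until an external controller removes every entry from schedulingGates.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
    - name: example.com/quota-check   # hypothetical gate, removed by e.g. a quota controller
  containers:
    - name: app
      image: nginx:1.25               # illustrative image
```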
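
And here is a minimal sketch of minDomains in a topology spread constraint; the values are illustrative, and note that minDomains only takes effect together with whenUnsatisfiable: DoNotSchedule.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      minDomains: 3                               # require at least three zones before Pods are schedulable
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule            # minDomains requires this setting
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: app
      image: nginx:1.25                           # illustrative image
```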

Beta Improvements:

  • Node Log Query: Initially introduced in v1.27, this feature has progressed to beta. It facilitates debugging by allowing users to retrieve logs of services running on nodes. To use it, ensure that the NodeLogQuery feature gate is enabled, and configure the kubelet with enableSystemLogHandler and enableSystemLogQuery set to true (see the first sketch after this list). The feature assumes that on Linux, service logs are accessible via journald, while on Windows they are available in the application log provider. Logs can also be accessed by reading files within /var/log/ (Linux) or C:\var\log\ (Windows).
  • CRD Validation Ratcheting: This feature acts like a safety net for managing CustomResourceDefinitions (CRDs), letting you change CRD validation rules without worrying too much about breaking things. It works by allowing updates to existing custom resources even if they temporarily don’t satisfy all the validation rules, as long as the parts of the resource that fail validation haven’t changed during the update. This means you can confidently add new validation rules to your CRDs without causing issues or needing to change the version of the object; it’s a flexible framework that keeps your CRDs reliable while still allowing for updates and improvements (see the second sketch after this list).
  • Contextual Logging: This feature is about adding extra tags or labels to logs to make them more informative and easier to understand. Developers and operators can inject additional details into logs, such as service names or transaction IDs, which makes it much simpler to figure out what’s happening in your Kubernetes environment, especially when dealing with complex distributed systems. By providing clearer insight into what’s going on behind the scenes, contextual logging makes troubleshooting and problem-solving a lot easier.
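
For Node Log Query, a minimal sketch of the kubelet settings described above could look like the following (treat it as an outline rather than a drop-in configuration file).

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true         # enable the feature gate on the kubelet
enableSystemLogHandler: true # expose the log handler
enableSystemLogQuery: true   # allow querying service logs
```

With that in place, a node’s service logs can be queried through the API server, for example with kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet".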
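
For CRD Validation Ratcheting, here is a hypothetical CRD excerpt (widgets.example.com is made up) that adds a stricter rule to an existing field. With the CRDValidationRatcheting feature gate enabled, existing Widget objects whose hostname violates the new pattern can still be updated, as long as the update does not touch the hostname field itself.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                hostname:
                  type: string
                  pattern: "^[a-z0-9.-]+$"   # stricter rule added after objects already exist
```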

Alpha Improvements:

  • Job Success/Completion Policy: Introduces the .spec.successPolicy field for Indexed Jobs, enabling users to determine when a Job is considered successful based on its associated Pods. This feature offers two criteria options: succeededIndexes, where the success of specific indexes determines Job success even if others fail, and succeededCount, which considers the Job successful once a specified number of indexes succeed. Once the defined success policy is met, the Job controller terminates any remaining Pods linked to the Job. This enhancement grants users greater flexibility in defining success conditions and improves control over Job completion behavior (see the first sketch after this list).
  • Recursive Read-only (RRO) Mounts: This feature adds an extra layer of security for your data by allowing you to mark volumes and all of their submounts as read-only, preventing accidental changes. Think of it as locking your files so they cannot be edited. This is particularly important for critical applications where maintaining data integrity is vital; RRO mounts ensure that your data remains unchanged, adding an additional safeguard to your cluster’s security. The feature is especially valuable in highly controlled environments where even the smallest alteration can cause serious problems (see the second sketch after this list).
  • Traffic Distribution for Services: This new feature adds the spec.trafficDistribution field to Kubernetes Services. The field allows users to express preferences for routing traffic to Service endpoints, enabling optimization for factors such as performance, cost, or reliability. Unlike traffic policies, which focus on strict semantic guarantees, traffic distribution expresses preferences, such as routing to topologically closer endpoints. One available value is PreferClose, which tells Kubernetes to prioritize routing traffic to endpoints that are closer to the client in terms of network distance; depending on how the cluster is set up, this “closeness” can mean endpoints within the same node, rack, zone, or even region. By choosing PreferClose, you allow Kubernetes to favor proximity rather than spreading traffic evenly across endpoints (see the third sketch after this list).
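
As a sketch of the success policy (this assumes the JobSuccessPolicy feature gate is enabled, and all names and values are illustrative), the Job below is declared successful once two of indexes 0 and 2–4 have completed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: success-policy-example
spec:
  completionMode: Indexed
  completions: 5
  parallelism: 5
  successPolicy:
    rules:
      - succeededIndexes: "0,2-4"   # only these indexes are considered
        succeededCount: 2           # the Job succeeds once two of them have succeeded
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36       # illustrative image
          command: ["sh", "-c", "echo index $JOB_COMPLETION_INDEX"]
```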
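
For recursive read-only mounts, here is a minimal sketch, assuming the RecursiveReadOnlyMounts feature gate is enabled; names and paths are illustrative, and recursiveReadOnly builds on a mount that is already readOnly.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-example
spec:
  volumes:
    - name: data
      hostPath:
        path: /mnt/data               # illustrative host path that may contain submounts
  containers:
    - name: app
      image: busybox:1.36             # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true              # recursiveReadOnly requires a read-only mount
          recursiveReadOnly: Enabled  # submounts under /data become read-only as well
```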
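
And for traffic distribution, a minimal sketch of a Service using PreferClose, assuming the ServiceTrafficDistribution feature gate is enabled (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferClose   # prefer endpoints topologically closer to the client
```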

Kubernetes 1.30 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using minikube. You can also easily install 1.30 using kubeadm.

You can visit the links below for more information regarding v1.30 on X:

https://twitter.com/kubernetesio

https://twitter.com/PoseidonCode
