Pitfalls to Avoid When Using Spot VMs in GKE for Cost Reduction

Kishore Jagannath
Google Cloud - Community
Feb 19, 2024

Introduction

Google Kubernetes Engine (GKE) provides both on-demand and Spot VMs to run user workloads (pods, deployments, daemon sets). Spot VMs are a cost-effective way to run certain types of workloads within your Kubernetes cluster. They are unused Compute Engine capacity offered at a steep discount and can reduce your Compute Engine usage costs by up to 91%. However, the lower price comes with two challenges:

1. Spot VMs can be preempted at any time. Compute Engine can reclaim a running Spot VM, so the pods running on it have to be rescheduled to a different node.

2. Spot VM availability is not guaranteed in a specific GCP region or zone. A request to create a new Spot VM may not succeed.

To counter these challenges, your applications need to be fault tolerant and able to withstand Spot VM preemptions. Compute Engine sends a termination notice and gives you up to 30 seconds to handle it before terminating the VM.
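As a minimal sketch of handling that notice, the pod below requests a termination grace period and runs a preStop hook so the container can drain in-flight work before the 30-second window expires. The pod name, image, command, and timings are illustrative assumptions, not part of this article.

apiVersion: v1
kind: Pod
metadata:
  name: graceful-worker            # hypothetical name
spec:
  # Keep the total shutdown time within the ~30s Spot termination window.
  terminationGracePeriodSeconds: 25
  containers:
  - name: worker
    image: nginx:latest            # placeholder image
    lifecycle:
      preStop:
        exec:
          # Give in-flight requests a few seconds to finish before SIGTERM is sent.
          command: ["/bin/sh", "-c", "sleep 10"]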

Also, if Spot VMs are unavailable because of high demand in a specific region or zone, business continuity should not be affected. This article provides an overview of how to handle the unavailability of Spot VMs in a GKE environment so that you achieve business continuity while saving costs.

Spot VMs in GKE

GKE clusters let you create node pools to manage and scale nodes with similar configurations. To use Spot VMs in GKE, add a new node pool and enable the "Use Spot VMs" option for it; for details, refer to the GKE documentation. GKE automatically adds the cloud.google.com/gke-spot=true label, and the cloud.google.com/gke-provisioning=spot label (for nodes running GKE version 1.25.5-gke.2500 or later), to nodes that use Spot VMs.
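For illustration, the labels on a Spot node might look like the following (the node name is hypothetical; the labels are the ones listed above):

apiVersion: v1
kind: Node
metadata:
  name: gke-my-cluster-spot-pool-1a2b3c4d    # hypothetical node name
  labels:
    cloud.google.com/gke-spot: "true"
    cloud.google.com/gke-provisioning: "spot"   # GKE 1.25.5-gke.2500 or later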

The snippets below show how to assign pods to Spot nodes using either a nodeSelector or nodeAffinity in the Pod spec.

apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    cloud.google.com/gke-spot: "true"

apiVersion: v1
kind: Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-spot
            operator: In
            values:
            - "true"

The nodeAffinity and nodeSelector YAML above targets Spot VMs through the cloud.google.com/gke-spot label, which GKE adds to every node in a Spot VM node pool. For details, refer to the GKE documentation.

Spot VM Use cases

Since the availability of Spot VMs in a specific GCP region is not guaranteed, the use cases below illustrate techniques to counter this challenge with minimal disruption.

Deploy Workloads on both Spot and On-Demand

Users may prefer to deploy a specific percentage of their pod replicas to Spot VMs and the rest to on-demand VMs. That way, a portion of the pods runs without disruption on on-demand nodes while still benefiting from the cost savings of Spot VMs. This option is suitable for lower-priority production workloads that can tolerate a fraction of their pods being stopped and rescheduled onto on-demand nodes.

This can be achieved with the preferredDuringSchedulingIgnoredDuringExecution option of Kubernetes node affinity. To spread a deployment roughly 50/50 between a Spot node pool and a standard node pool, specify equal weights:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Preference 1: Spot nodes, labelled by GKE automatically
      - weight: 5
        preference:
          matchExpressions:
          - key: cloud.google.com/gke-spot
            operator: In
            values:
            - "true"
      # Preference 2: the on-demand node pool, matched by its node pool label
      - weight: 5
        preference:
          matchExpressions:
          - key: cloud.google.com/gke-nodepool
            operator: In
            values:
            - pool-1
  containers:
  - image: nginx:latest
    imagePullPolicy: Always
    name: nginx-1

In the Deployment YAML above, we use node affinity with preferredDuringSchedulingIgnoredDuringExecution and specify a weight for each preference. The first preference matches all Spot VMs, since Spot nodes in GKE carry the cloud.google.com/gke-spot=true label by default. The second preference matches the standard node pool by its node pool label.

In the example above, the replicas of the deployment tend to be distributed evenly between the Spot and standard node pools because the preferences carry equal weight. The weights can be changed; for example, to favour Spot over standard, increase the weight of the Spot preference to 6 and lower the standard preference to 4. For details, refer to the Kubernetes documentation.

When Spot VMs are unavailable, any new pods being scheduled, and any existing pods being rescheduled off Spot nodes, are automatically assigned to on-demand instances, thus providing business continuity.

Note that the weights only express a preference to the Kubernetes scheduler; Kubernetes does not guarantee an exact split of replicas based on them. It is therefore suggested to experiment with the weights before using them in a production environment. For details on node selection in Kubernetes, refer to the Kubernetes documentation.
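For such experiments, a complete Deployment manifest could look like the sketch below; the deployment name, replica count, and the on-demand pool name pool-1 are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-split                # hypothetical name
spec:
  replicas: 10                     # illustrative replica count
  selector:
    matchLabels:
      app: nginx-split
  template:
    metadata:
      labels:
        app: nginx-split
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 5              # preference for Spot nodes
            preference:
              matchExpressions:
              - key: cloud.google.com/gke-spot
                operator: In
                values:
                - "true"
          - weight: 5              # preference for the on-demand pool
            preference:
              matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - pool-1           # assumed on-demand node pool name
      containers:
      - name: nginx-1
        image: nginx:latest
        imagePullPolicy: Always

After applying it, you can check how many replicas landed on Spot versus on-demand nodes and adjust the weights accordingly.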

Spot VM availability cannot be guaranteed, but GCP does provide a 30-second termination notice before preempting a Spot VM. For details on handling Spot VM termination, refer to the Compute Engine documentation.

Deploy in Standard when Spot is Unavailable

This technique can be used where saving costs is the top priority and services can withstand a small amount of downtime, for example in Dev or QA environments, which can tolerate a pod being shut down and rescheduled.

In this configuration, pods are always scheduled on a Spot node when one is available; when Spot nodes are unavailable because of high demand, the pod is scheduled on an on-demand instance instead. Notice that there is only a single preference for Spot VMs (in addition to any existing node and pod affinity), with no competing preference for an on-demand pool.

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Single preference: schedule on Spot nodes when they are available
      - weight: 1
        preference:
          matchExpressions:
          - key: cloud.google.com/gke-spot
            operator: In
            values:
            - "true"
  containers:
  - image: nginx:latest
    imagePullPolicy: Always
    name: nginx-1

The configuration above specifies a single preference for Spot VMs (a weight is included only because the API requires one; there is nothing to weigh it against). The Kubernetes scheduler will try to honour it and schedule the pod on a Spot VM if one is available; if not, a suitable on-demand node is chosen.

Conclusion

While Spot VMs are a great option for saving cost, it is essential to plan for preemption and unavailability of Spot VMs so that business continuity is not affected. Note that pods scheduled onto on-demand instances because Spot capacity was unavailable at the time will keep running there even if Spot VMs become available later. One simple technique to counter this is to perform a rolling restart of the suitable pods running on on-demand nodes during off-peak hours, so that they are regularly rescheduled according to the preferences and weights declared in Kubernetes.
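As a minimal sketch of that off-peak rolling restart, assuming a Deployment named nginx-split in the default namespace, a kubectl-capable container image, and a ServiceAccount with permission to patch deployments (none of which come from this article), a CronJob could trigger the restart nightly:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: offpeak-rollout-restart        # hypothetical name
spec:
  schedule: "0 3 * * *"                # 03:00 every day, assumed off-peak
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rollout-restarter   # assumed SA with RBAC to patch deployments
          restartPolicy: Never
          containers:
          - name: restart
            image: bitnami/kubectl:latest         # assumed kubectl image
            command:
            - /bin/sh
            - -c
            - kubectl rollout restart deployment/nginx-split -n default

The restarted pods then go through scheduling again and can land back on Spot nodes if capacity is available.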
